<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Nabarun Pal</title><link>https://nabarun.dev/</link><description>Posts by Nabarun Pal</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>hey@nabarun.dev</managingEditor><webMaster>hey@nabarun.dev</webMaster><lastBuildDate>Mon, 06 Jan 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://nabarun.dev/index.xml" rel="self" type="application/rss+xml"/><item><title>About</title><link>https://nabarun.dev/about/</link><pubDate>Mon, 06 Jan 2025 00:00:00 +0000</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/about/</guid><description>&lt;p>I&amp;rsquo;m Nabarun Pal, also known as &lt;code>palnabarun&lt;/code> or &lt;code>theonlynabarun&lt;/code>, a distributed systems engineer and open source contributor with a passion for building resilient infrastructure and fostering collaborative communities. Currently, I work on Kubernetes and cloud-native technologies, contributing to the ecosystem that powers modern distributed applications.&lt;/p>
&lt;p>When I&amp;rsquo;m not deep in code or community discussions, you can find me planning my next adventure, brewing different coffee concoctions, tweaking &lt;a href="https://nabarun.dev/setup">my homelab setup&lt;/a>, or exploring new &lt;a href="https://nabarun.dev/setup/keyboards/">mechanical keyboards&lt;/a>. I believe in the power of open source to democratize technology and create opportunities for everyone to contribute and learn.&lt;/p>
&lt;p>A detailed view of my speaking engagements is on the &lt;a href="https://nabarun.dev/speaking">/speaking&lt;/a> page.&lt;/p></description></item><item><title>What's the plan for 2022?</title><link>https://nabarun.dev/posts/whats-the-plan-for-2022/</link><pubDate>Mon, 03 Jan 2022 21:32:11 +0530</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/whats-the-plan-for-2022/</guid><description>&lt;p>In 2022, I want to spend more time on &amp;ldquo;Doing Less!&amp;rdquo;: taking things as they come, taking breaks, and trying to travel or work on mechanical keyboards during those breaks. The time off that I took in 2021 (the first half of August and the second half of December) gave me a lot of breathing space and time to rethink priorities, and helped me partially overcome a burnout brought on by several factors (thanks to VMware for the generous leave!).&lt;/p>
&lt;p>Last year, I started actively taking care of my health. I jogged ~700km in the last quarter (Oct-Dec), although I did not jog while travelling to Bangalore/Delhi/Kolkata. I want to continue the same trend and target at least 3000km of jogging and a 10km run in 2022.&lt;/p>
&lt;p>2021 was also the year I moved back to my hometown, Agartala, after a span of 9 years. I spent a lot of time with close family members and friends from school. I plan to spend more time with the people I care about and who care about me, be it in Bangalore or Agartala.&lt;/p></description></item><item><title>Update: Weekly Mentoring Sessions</title><link>https://nabarun.dev/posts/weekly-mentoring-sessions-update/</link><pubDate>Sat, 27 Nov 2021 05:25:34 +0530</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/weekly-mentoring-sessions-update/</guid><description>&lt;p>I have had the pleasure of talking with 30+ folks and helping them on their journey into computer science and/or in growing their careers with Open Source Software. It has been an honour that so many wanted to talk to me and hear my views.&lt;/p>
&lt;p>For the month of December, I am going to take a break from the mentoring sessions, as I will be travelling on most weekends and will be on vacation in the latter half of the month.&lt;/p>
&lt;p>Fret not, I will try to make up for the lost time by doubling my commitment for January 2022. But if you need to talk with me urgently, drop me a ping at &lt;code>hey [at] nabarun [dot] dev&lt;/code> and I will try to schedule something that works for both of us.&lt;/p>
&lt;p>Wish you all a very happy December! &amp;#x1f389;&lt;/p>
&lt;p>PS: Stay tuned to the &lt;a href="//nabarun.dev/index.xml">RSS feed&lt;/a>! There are many articles languishing in my drafts; I may publish a few of them.&lt;/p></description></item><item><title>Giving Back</title><link>https://nabarun.dev/posts/giving-back/</link><pubDate>Sun, 18 Jul 2021 12:40:50 +0530</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/giving-back/</guid><description>&lt;blockquote>
&lt;p>&lt;strong>Update (2024):&lt;/strong> 1-on-1 mentorship sessions are currently paused. Please reach out via the &lt;a href="https://nabarun.dev/contact">contact page&lt;/a> instead.&lt;/p>&lt;/blockquote>
&lt;p>Over time, I have learnt a lot from the open-source software community, not just in India but in various parts of the world. I have had the privilege of learning from some of the best software engineers in the world. A lot of great mentors have influenced me and helped shape my way into OSS. These interactions were invaluable to me as a self-taught software engineer.&lt;/p>
&lt;p>While talking to a junior from my alma mater, I realized there is still a gap between people wanting to be mentored and good mentors in the OSS community. There are outstanding programs like &lt;a href="//foss.training">DGPLUG Summer Training&lt;/a> which help new contributors learn how to survive in the open-source community by covering the mechanics in breadth. Still, people usually don&amp;rsquo;t discover them on their own. I understand how frustrating it is to get stuck when you are exploring something. Having a mentor or partner with whom you can discuss doubts is vital in that situation.&lt;/p>
&lt;p>As a way of giving back to the community that I have learned so much from, I am pledging 4 hours per week of my time to talk with folks who seek mentorship in OSS or in their career journey, or, in general, to talk about any common interests.&lt;/p>
&lt;p>Just go to &lt;a href="https://calendly.com/palnabarun/1-on-1">https://calendly.com/palnabarun/1-on-1&lt;/a> to schedule a slot.&lt;/p></description></item><item><title>My journey in the Kubernetes Release Team: Part 1</title><link>https://nabarun.dev/posts/kubernetes-release-team-journey-pt1/</link><pubDate>Thu, 10 Sep 2020 08:58:41 +0530</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/kubernetes-release-team-journey-pt1/</guid><description>&lt;p>During this period last year, I got interested in how a new Kubernetes version is released and what goes on behind it. After some searching, I found that all of the process and the roles are well documented in the &lt;a href="https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks">Release Team Role Handbooks&lt;/a>.&lt;/p>
&lt;p>&lt;img src="https://nabarun.dev/images/rt/rt-handbooks.png" alt="rt-handbooks-repo">&lt;/p>
&lt;p>I read through all of them to understand the process, why there are so many roles, and what responsibilities each team is entrusted with. All of that sounded pretty interesting to me. All the teams did amazing work and were equally crucial to the smooth functioning of a Kubernetes release cycle. I got specifically interested in the Enhancements and CI Signal teams, and started to dig into how I could lend a hand to the effort.&lt;/p>
&lt;h2 id="shadow-roles">Shadow Roles&lt;/h2>
&lt;p>With the role handbooks, I got to know of the &lt;a href="https://github.com/kubernetes/sig-release/blob/master/release-team/shadows.md">Release Team Shadow Program&lt;/a>, which aims to mentor new contributors and train them to be the next leads of the Kubernetes Release Team. The shadows are expected to learn from the leads and fill in wherever necessary. You can think of these positions as &amp;ldquo;trainee/intern&amp;rdquo; roles at your workplace. This is just a primer on the program; you can read more at the link.&lt;/p>
&lt;blockquote>
&lt;p>Okay. I know where to start. &lt;strong>But how?&lt;/strong>&lt;/p>&lt;/blockquote>
&lt;p>It turns out that applying for the Shadow Program requires a &lt;strong>minimal amount of effort&lt;/strong>, &lt;strong>bucket loads of curiosity&lt;/strong> and a &lt;strong>time commitment&lt;/strong>. At the start of every release cycle, the Release Team pushes out a public form inviting applications for the shadow roles.&lt;/p>
&lt;h2 id="taking-the-plunge">Taking the plunge&lt;/h2>
&lt;p>I took the initiative and filled out the form with my interests and thoughts. A few days later, I, along with the other shadows, was welcomed by &lt;a href="https://twitter.com/MrBobbyTables">MrBobbyTables&lt;/a> to my first involvement with the Release Team. &amp;#x1f389;&lt;/p>
&lt;p>&lt;img src="https://nabarun.dev/images/rt/1.17-intro.png" alt="1.17 Introduction">&lt;/p>
&lt;p>The next few months were a roller coaster ride. The team I was shadowing was the Enhancements Team, and our work was to shepherd features for the Kubernetes release and maintain the status of &lt;a href="https://github.com/kubernetes/enhancements">Kubernetes Enhancement Proposals&lt;/a>, aka KEPs.&lt;/p>
&lt;p>The role involved understanding each outstanding KEP, pinging the respective OWNERS to check whether the enhancement would be graduating in the current release cycle, and keeping track of whether the enhancements satisfied the requirements for the release.&lt;/p>
&lt;h3 id="here-were-a-few-takeaways-that-i-took-while-working-on-the-team">Here were a few takeaways that I took while working on the team:&lt;/h3>
&lt;ul>
&lt;li>Knowledge of what is involved when adding a feature into Kubernetes!!!&lt;/li>
&lt;li>Reading through an enormous number of KEPs, I got to know about the features themselves&lt;/li>
&lt;li>Communicating effectively with others and breaking the ice&lt;/li>
&lt;li>A lot of GitHub triage skills and tricks&lt;/li>
&lt;li>Tricks of wrangling data on a spreadsheet &amp;#x1f609;&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>I will write about the complete lifecycle of a Kubernetes Enhancement Proposal in a future article.&lt;/p>&lt;/blockquote>
&lt;p>We released &lt;code>Kubernetes 1.17: The Chillest Release&lt;/code> (Yes! That is the release theme &amp;#x1f603;) after all the efforts of the &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.17/release_team.md">1.17 release team&lt;/a>. The last release of the year is usually the most chilled out and is a bit shorter due to the December vacations.&lt;/p>
&lt;p>Having spent all that effort ensuring a smooth release, the whole team became akin to a family for me. We, the Enhancements Team (&lt;a href="https://twitter.com/MrBobbyTables">Bob&lt;/a>, &lt;a href="https://twitter.com/jrrickard">Jeremy&lt;/a>, &lt;a href="https://twitter.com/antheajung">Anna&lt;/a>, &lt;a href="https://twitter.com/KristinCMartin">Kristin&lt;/a>), even got together at KubeCon San Diego to meet up in person.&lt;/p>
&lt;p>&lt;img src="https://nabarun.dev/images/rt/1.17-rt-meet.jpg" alt="1.17-rt-meet">&lt;/p>
&lt;h2 id="the-next-steps">The Next Steps&lt;/h2>
&lt;p>After Kubernetes 1.17, I signed up again for the Kubernetes 1.18 Enhancements team to get more exposure to the KEP landscape. This time &lt;a href="https://twitter.com/jrrickard">Jeremy&lt;/a> was leading the Enhancements Team. It was fun to work with the enhancements team (&lt;a href="https://twitter.com/jrrickard">Jeremy&lt;/a>, &lt;a href="https://github.com/kikisdeliveryservice">Kirsten&lt;/a>, &lt;a href="https://twitter.com/helayoty">Heba&lt;/a>, &lt;a href="https://twitter.com/johnbelamaric">John&lt;/a>) again, much the same as before, except that this time there were new shadows along with me.&lt;/p>
&lt;p>This release cycle was mostly the same for me, except that, having served on the team previously, I was a bit more involved than the last time. And, after all the hard work of the &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.18/release_team.md">1.18 release team&lt;/a>, we were treated to a &lt;strong>quarky&lt;/strong> Kubernetes 1.18 &amp;#x2b50;.&lt;/p>
&lt;h2 id="graduating-to-be-the-enhancements-lead-rocket">Graduating to be the Enhancements Lead &amp;#x1f680;&lt;/h2>
&lt;p>Aaaaaand after a splendid 1.18, &lt;a href="https://twitter.com/jrrickard">Jeremy&lt;/a> nominated me to be the Enhancements Lead for the Kubernetes 1.19 Release Team. I was stoked to get the opportunity, and at the same time scared about whether I could do full justice to the responsibility bestowed upon me. The knowledge of the role gained while working with &lt;a href="https://twitter.com/MrBobbyTables">Bob&lt;/a> and &lt;a href="https://twitter.com/jrrickard">Jeremy&lt;/a> on the previous release teams gave me the confidence that I could fulfill its responsibilities.&lt;/p>
&lt;p>&lt;a href="https://github.com/kubernetes/sig-release/issues/1031">&lt;img src="https://nabarun.dev/images/rt/1.19-nomination.png" alt="1.19 Nomination">&lt;/a>&lt;/p>
&lt;p>This release cycle eventually became special in many ways. We were hit by a deadly pandemic which changed a lot of things in our lives. The release cycle was extended to 5 months instead of the usual 12-week cadence. The pandemic and various other factors shaved off quite a bit of the bandwidth the community previously had. These were crucial times for the whole world, and the team didn&amp;rsquo;t want to put undue additional pressure on the amazing contributors that we have.&lt;/p>
&lt;p>The shadows that I selected for the Enhancements Team spanned 12 and a half hours of timezones, creating amazing round-the-earth coverage for the team. This meant no team member had to toil at odd hours of the day. I take this opportunity to &lt;strong>thank again all the enhancements shadows - &lt;a href="https://github.com/kikisdeliveryservice">Kirsten&lt;/a>, &lt;a href="https://twitter.com/NeerDoseMonster">Harsha&lt;/a>, &lt;a href="https://github.com/msedzins">Miroslaw&lt;/a> and &lt;a href="https://twitter.com/johnbelamaric">John&lt;/a> for their efforts even in such hard times&lt;/strong>.&lt;/p>
&lt;p>After those tense &amp;amp; tough 5 months of firefighting a lot of issues, the &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.19/release_team.md">1.19 release team&lt;/a> released &lt;code>Kubernetes 1.19: Accentuate the Paw-sitive&lt;/code>. You can read about the release on the &lt;a href="https://kubernetes.io/blog/2020/08/26/kubernetes-release-1.19-accentuate-the-paw-sitive/">Kubernetes Blog&lt;/a> and in the upcoming &lt;a href="https://www.cncf.io/webinars/kubernetes-1-19/">Kubernetes 1.19 Release Webinar&lt;/a>, where I will be presenting along with the 1.19 Release Lead &lt;a href="https://twitter.com/onlydole">Taylor&lt;/a> and the 1.19 Communications Lead &lt;a href="https://twitter.com/mkoerbi">Max&lt;/a>.&lt;/p>
&lt;p>With that, my watch over the Release Enhancements Team ended, and it was time to hand over the baton to the next lead. &amp;#x270c;&lt;/p>
&lt;p>I was very happy to nominate &lt;a href="https://github.com/kikisdeliveryservice">Kirsten&lt;/a> to succeed me as the next Enhancements Lead of the Release Team. &amp;#x1f389;&lt;/p>
&lt;p>&lt;a href="https://github.com/kubernetes/sig-release/issues/1185">&lt;img src="https://nabarun.dev/images/rt/1.20-nomination.png" alt="1.20 Nomination">&lt;/a>&lt;/p>
&lt;p>Along with that, it was time for me to graduate to my next role. I look forward to working with &lt;a href="https://twitter.com/jrrickard">Jeremy&lt;/a>, &lt;a href="https://twitter.com/coffeeartgirl">Savitha&lt;/a> and &lt;a href="https://twitter.com/hasheddan">Daniel&lt;/a> on Kubernetes 1.20. I will be shadowing the Release Lead for Kubernetes 1.20. &amp;#x1f60e;&lt;/p>
&lt;p>&lt;a href="https://github.com/kubernetes/sig-release/issues/1201">&lt;img src="https://nabarun.dev/images/rt/1.20-onboarding.png" alt="1.20 Onboarding">&lt;/a>&lt;/p>
&lt;h2 id="how-can-you-get-involved-raised_hands">How can you get involved? &amp;#x1f64c;&lt;/h2>
&lt;p>The last release of the year, Kubernetes 1.20, is going to be published in December. The Release Team is looking for folks for the shadow roles. All you need to do is read the &lt;a href="https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks">role handbooks&lt;/a>, figure out which role interests you the most, and then fill out the &lt;a href="https://forms.gle/58jyAeewYGJNbsVZA">form&lt;/a>. We ask every prospect to fill out the form so that we can gauge whether the Release Team would be a good fit for you and find the right role for you.&lt;/p>
&lt;p>The applications will be open until &lt;em>End of Day Friday, September 11, 2020 Pacific Time&lt;/em>.&lt;/p>
&lt;h2 id="still-in-doubt-mag">Still in doubt? &amp;#x1f50d;&lt;/h2>
&lt;p>I would say just go ahead and volunteer for the shadow roles. &amp;#x1f6a2;&lt;/p>
&lt;p>Feel free to contact me on Twitter at &lt;a href="https://twitter.com/theonlynabarun">@theonlynabarun&lt;/a>, or on the &lt;a href="https://slack.k8s.io">Kubernetes Slack&lt;/a>, in case you have anything to ask.&lt;/p>
&lt;h2 id="quick-references">Quick References&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://github.com/kubernetes/sig-release/blob/master/release-team/release-team-selection.md">The selection process&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes/sig-release/blob/master/release-team/shadows.md">Shadow overview&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks">Role Handbooks&lt;/a>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="update-since-the-original-version">Update since the original version&lt;/h3>
&lt;p>I wrote this article over six months ago and have since led the &lt;a href="https://github.com/kubernetes/sig-release/blob/master/releases/release-1.21/release-team.md">Kubernetes 1.21 Release Team&lt;/a>. I plan to write about my experience leading the Release Team in the near future. Do subscribe to the &lt;a href="https://nabarun.dev/index.xml">RSS feed&lt;/a> for updates.&lt;/p></description></item><item><title>Rubber Ducks: My trusted companions</title><link>https://nabarun.dev/posts/rubber-ducks/</link><pubDate>Sat, 22 Aug 2020 23:25:06 +0530</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/rubber-ducks/</guid><description>&lt;p>There are times when I find myself stuck while solving a problem. This deadlock can arise due to several factors. Sometimes I need a new perspective on the problem. Sometimes I just need to go through my approach with a fresh mind.&lt;/p>
&lt;p>I can ping my colleagues to get new perspectives or explain to them what I am trying to achieve. But I can&amp;rsquo;t always find someone to listen, since everyone is busy with their own work.&lt;/p>
&lt;p>What I do in those situations is either&lt;/p>
&lt;ul>
&lt;li>Write down my approach on a piece of paper in the simplest terms, or&lt;/li>
&lt;li>Talk to my &lt;strong>Rubber Ducks&lt;/strong> about the approach assuming that the ducks have ZERO knowledge about what I am doing.&lt;/li>
&lt;/ul>
&lt;p>I learned about the &lt;strong>Rubber Duck&lt;/strong> debugging paradigm while reading &lt;a href="https://pragprog.com/titles/tpp20/the-pragmatic-programmer-20th-anniversary-edition/">The Pragmatic Programmer&lt;/a> by Andrew Hunt and David Thomas. It is a gem of a book; I feel every software engineer looking to excel in their art should read it. They discuss this method in the Debugging chapter. The idea is to find the cause of a problem by explaining it in very simple terms to someone else. The person listening shouldn&amp;rsquo;t speak a word and should just nod along. This simple exercise of explaining your approach in well-defined, atomic steps can give you new insights into your problem.&lt;/p>
&lt;p>Obviously, you can&amp;rsquo;t have someone with you all the time just to listen to you. This is where inanimate objects that can&amp;rsquo;t speak come in. They are the ideal listeners to explain your problem to: they won&amp;rsquo;t ever judge you, and they are always with you no matter what happens. Having said all that, I want to introduce you to my rubber ducks:&lt;/p>
&lt;p>&lt;img src="https://nabarun.dev/images/rubber-ducks.jpg" alt="My rubber ducks">&lt;/p>
&lt;p>In the front row is Goldie, then Zee, then Captain Kube, and the tall bloke is Phippy. The whole gang is known as &lt;a href="https://phippy.io">Phippy and Friends&lt;/a>. They are always on my table, watching over me and listening whenever I want to speak to them. I know it&amp;rsquo;s a bit intimidating having someone look at you all the time, but eventually you get along with them. &amp;#x1f609;&lt;/p>
&lt;p>&lt;em>Do note that this method works for me, but it won&amp;rsquo;t necessarily work for everyone. I always ask people to find their own debugging comfort zone.&lt;/em>&lt;/p></description></item><item><title>Running Tor Proxy with Docker</title><link>https://nabarun.dev/posts/running-tor-proxy-with-docker/</link><pubDate>Sun, 05 Jul 2020 15:45:06 +0530</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/running-tor-proxy-with-docker/</guid><description>&lt;p>Today I was testing &lt;a href="https://github.com/kushaldas/dns-tor-proxy">dns-tor-proxy&lt;/a>, which requires a SOCKS5 Tor proxy, and realized I had never run a Tor service on my current machine. I use the &lt;a href="https://www.torproject.org/">Tor browser&lt;/a> almost daily for browsing websites I have absolutely no trust in, but not the standalone Tor proxy. In this article, I will set one up using the system package as well as inside a Docker container.&lt;/p>
&lt;h2 id="what-is-a-tor-proxy">What is a Tor proxy?&lt;/h2>
&lt;p>A Tor proxy is a SOCKS5 proxy which routes your traffic through the Tor network. The Tor network ensures that any traffic originating from inside the network gets routed through at least 3 random relays before exiting through the exit node.&lt;/p>
&lt;p>It helps you anonymize traffic, block trackers, and prevent surveillance, among other benefits. If you are wondering who should use Tor, the answer is everyone who cares about their privacy. You can read more about the architecture &lt;a href="https://2019.www.torproject.org/about/overview.html.en#thesolution">here&lt;/a>.&lt;/p>
&lt;h2 id="arch-linux">Arch Linux&lt;/h2>
&lt;p>Tor is available in the Arch package repository and can be installed simply by:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#75715e"># Install Tor&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>$ sudo pacman -S tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>...
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#75715e"># Start the service&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>$ sudo systemctl start tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#75715e"># Check whether the service is running&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>$ sudo netstat -tunlp | grep tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>tcp &lt;span style="color:#ae81ff">0&lt;/span> &lt;span style="color:#ae81ff">0&lt;/span> 127.0.0.1:9050 0.0.0.0:* LISTEN 3808529/tor
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>We see above that installing &lt;code>tor&lt;/code> through &lt;code>pacman&lt;/code> set up the systemd service as well. Jump to &lt;a href="#using-the-proxy">Using the proxy&lt;/a> for the demo.&lt;/p>
&lt;h2 id="debianubuntu">Debian/Ubuntu&lt;/h2>
&lt;p>The packages in the Debian ecosystem are often outdated. To get the latest version, one almost always needs to add third-party package repositories. I am not going into detail about how to install Tor in that ecosystem, since there are a &lt;strong>lot&lt;/strong> of distribution/version combinations. The steps are well detailed in the official Tor installation &lt;a href="https://2019.www.torproject.org/docs/debian.html.en">docs&lt;/a>.&lt;/p>
&lt;h2 id="docker">Docker&lt;/h2>
&lt;p>We will build a very lightweight Docker image to reduce the footprint.&lt;/p>
&lt;p>Let&amp;rsquo;s start with the Tor configuration,&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>SocksPort 0.0.0.0:9050
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The above should get you started with the defaults. Feel free to change the port to whatever you like. The listen address should be &lt;code>0.0.0.0&lt;/code>, since we will be accessing the proxy from outside the Docker container.&lt;/p>
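&lt;p>As a sketch of where this could go, a slightly fuller &lt;code>torrc&lt;/code> might look like the following. The &lt;code>Log&lt;/code> and &lt;code>DataDirectory&lt;/code> lines are illustrative additions on top of the minimal one-line config, not part of the original setup, but they are handy when running Tor inside a container:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell"># Listen on all interfaces so the proxy is reachable from outside the container
SocksPort 0.0.0.0:9050

# Illustrative extras: send logs to stdout so `docker logs` picks them up,
# and keep Tor state in a well-known directory
Log notice stdout
DataDirectory /var/lib/tor
&lt;/code>&lt;/pre>&lt;/div>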
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-dockerfile" data-lang="dockerfile">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#75715e"># set alpine as the base image of the Dockerfile&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#66d9ef">FROM&lt;/span>&lt;span style="color:#e6db74"> alpine:latest&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#75715e"># update the package repository and install Tor&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#66d9ef">RUN&lt;/span> apk update &lt;span style="color:#f92672">&amp;amp;&amp;amp;&lt;/span> apk add tor&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#75715e"># Copy over the torrc created above and set the owner to `tor`&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#66d9ef">COPY&lt;/span> torrc /etc/tor/torrc&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#66d9ef">RUN&lt;/span> chown -R tor /etc/tor&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#75715e"># Set `tor` as the default user during the container runtime&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#66d9ef">USER&lt;/span>&lt;span style="color:#e6db74"> tor&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#75715e"># Set `tor` as the entrypoint for the image&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#66d9ef">ENTRYPOINT&lt;/span> [&lt;span style="color:#e6db74">&amp;#34;tor&amp;#34;&lt;/span>]&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#75715e"># Set the default container command&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#75715e"># This can be overridden later when running a container&lt;/span>&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#960050;background-color:#1e0010">&lt;/span>&lt;span style="color:#66d9ef">CMD&lt;/span> [&lt;span style="color:#e6db74">&amp;#34;-f&amp;#34;&lt;/span>, &lt;span style="color:#e6db74">&amp;#34;/etc/tor/torrc&amp;#34;&lt;/span>]&lt;span style="color:#960050;background-color:#1e0010">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Let&amp;rsquo;s build the image now.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ docker build -t palnabarun/tor .
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Sending build context to Docker daemon 67.58kB
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Step 1/6 : FROM alpine:latest
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; a24bb4013296
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Step 2/6 : RUN apk update &lt;span style="color:#f92672">&amp;amp;&amp;amp;&lt;/span> apk add tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; Using cache
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; a5ea632ba987
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Step 3/6 : COPY torrc /etc/tor/torrc
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; 5b351b9847bc
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Step 4/6 : RUN chown -R tor /etc/tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; Running in 1f6950f03475
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Removing intermediate container 1f6950f03475
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; 060ded5c532c
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Step 5/6 : USER tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; Running in aa0553be76dc
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Removing intermediate container aa0553be76dc
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; d763c1181285
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Step 6/6 : ENTRYPOINT &lt;span style="color:#f92672">[&lt;/span>&lt;span style="color:#e6db74">&amp;#34;tor&amp;#34;&lt;/span>&lt;span style="color:#f92672">]&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; Running in 97fd7f9ee693
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Removing intermediate container 97fd7f9ee693
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ---&amp;gt; 13c889f5b018
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Successfully built 13c889f5b018
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Successfully tagged palnabarun/tor:latest
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>You might also be wondering about the image size.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ docker image ls | grep palnabarun/tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>palnabarun/tor latest 13c889f5b018 About a minute ago 21.1MB
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;blockquote>
&lt;p>The image is a mere 21.1MB. Building Docker images using &lt;a href="https://alpinelinux.org/">Alpine Linux&lt;/a> as the base results in very lightweight images.&lt;/p>&lt;/blockquote>
&lt;p>Let&amp;rsquo;s run the proxy.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ docker run &lt;span style="color:#ae81ff">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#ae81ff">&lt;/span> --rm &lt;span style="color:#ae81ff">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#ae81ff">&lt;/span> --detach &lt;span style="color:#ae81ff">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#ae81ff">&lt;/span> --name tor &lt;span style="color:#ae81ff">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#ae81ff">&lt;/span>    --publish 9050:9050 &lt;span style="color:#ae81ff">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#ae81ff">&lt;/span>    palnabarun/tor &lt;span style="color:#75715e"># the published port should match the SocksPort in your torrc&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>aef03d84628b
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>$ docker ps | grep tor
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>aef03d84628b palnabarun/tor &lt;span style="color:#e6db74">&amp;#34;tor&amp;#34;&lt;/span> &lt;span style="color:#ae81ff">31&lt;/span> seconds ago Up &lt;span style="color:#ae81ff">30&lt;/span> seconds 0.0.0.0:9050-&amp;gt;9050/tcp tor
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>After some time, the Tor proxy will successfully establish a Tor circuit and be ready to use.&lt;/p>
&lt;p>The Tor config and Dockerfile can be found &lt;a href="https://github.com/palnabarun/tor-docker">here&lt;/a>, and there is a ready-to-use image on &lt;a href="https://hub.docker.com/r/palnabarun/tor">Docker Hub&lt;/a>.&lt;/p>
&lt;h2 id="using-the-proxy">Using the proxy&lt;/h2>
&lt;p>Let&amp;rsquo;s test whether the proxy is working correctly with a few simple &lt;code>curl&lt;/code> calls.&lt;/p>
&lt;p>The request below does not go through the proxy and hence shows your ISP-provided IP address.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ curl https://check.torproject.org/api/ip
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#f92672">{&lt;/span>&lt;span style="color:#e6db74">&amp;#34;IsTor&amp;#34;&lt;/span>:false,&lt;span style="color:#e6db74">&amp;#34;IP&amp;#34;&lt;/span>:&lt;span style="color:#e6db74">&amp;#34;49.30.XX.XX&amp;#34;&lt;/span>&lt;span style="color:#f92672">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Now, if we specify the Tor proxy when making the request, the IP address would be different.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ curl --socks5 127.0.0.1:9050 https://check.torproject.org/api/ip
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#f92672">{&lt;/span>&lt;span style="color:#e6db74">&amp;#34;IsTor&amp;#34;&lt;/span>:true,&lt;span style="color:#e6db74">&amp;#34;IP&amp;#34;&lt;/span>:&lt;span style="color:#e6db74">&amp;#34;185.220.XXX.XXX&amp;#34;&lt;/span>&lt;span style="color:#f92672">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Also, notice the value of &lt;code>IsTor&lt;/code> in both cases: the service running at &lt;code>check.torproject.org&lt;/code> knows whether the traffic was routed through the Tor network.&lt;/p>
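To avoid repeating the proxy flags, they can be wrapped in a small helper; this is a minimal sketch (the `tor_curl_args` name is my own, not from the post). One detail worth knowing: curl's `--socks5-hostname` hands the hostname to the proxy, so DNS resolution also happens over Tor, whereas plain `--socks5` resolves the name locally and can leak your DNS queries.

```shell
# Helper that emits the curl flags for routing a request through a local
# Tor SOCKS proxy. Using --socks5-hostname makes the proxy resolve the
# hostname, so DNS lookups go over Tor instead of the local resolver.
tor_curl_args() {
  local host="${1:-127.0.0.1}" port="${2:-9050}"
  printf -- '--socks5-hostname %s:%s' "$host" "$port"
}

# usage: curl $(tor_curl_args) https://check.torproject.org/api/ip
```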
&lt;p>The very same proxy can be used in your browser by going to the Network Settings and switching to a manual proxy configuration. However, I highly recommend using the &lt;a href="https://www.torproject.org/">Tor Browser&lt;/a> if you just want to browse the internet through Tor.&lt;/p>
&lt;blockquote>
&lt;p>Note: The IP addresses are partially redacted for privacy reasons.&lt;/p>&lt;/blockquote>
&lt;hr>
&lt;p>If you are like me and cherish reading RFCs, check out the following links:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://svn-archive.torproject.org/svn/projects/design-paper/tor-design.pdf">The original Tor design&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3.txt">Tor v3 onion services protocol&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>It's always DNS!</title><link>https://nabarun.dev/posts/its-always-dns/</link><pubDate>Tue, 30 Jun 2020 00:00:00 +0000</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/its-always-dns/</guid><description>&lt;h2 id="context">Context&lt;/h2>
&lt;p>I was running &lt;a href="https://airflow.apache.org">Airflow&lt;/a> inside a Kubernetes cluster, but the Airflow pods were not able to connect to the PostgreSQL database running inside the cluster. The following was consistently seen in the Airflow logs, even though the &lt;code>postgres-airflow&lt;/code> service was up and running.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">&lt;span style="display:flex;">&lt;span>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>name &amp;#34;postgres-airflow&amp;#34; to address: Temporary failure in name resolution
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>For the rest of this post, we will assume that all the user-run components inside the cluster are healthy and focus on what is causing the name resolution errors.&lt;/p>
&lt;h2 id="the-whole-story">The whole story&lt;/h2>
&lt;p>I use &lt;a href="https://kind.sigs.k8s.io">kind&lt;/a> for testing and playing around with Kubernetes workloads. I spin up clusters with any specific Kubernetes version as and when needed. The cluster in question was running Kubernetes 1.15.7, created using the following kind configuration.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#f92672">kind&lt;/span>: &lt;span style="color:#ae81ff">Cluster&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#f92672">apiVersion&lt;/span>: &lt;span style="color:#ae81ff">kind.x-k8s.io/v1alpha4&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#f92672">nodes&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>- &lt;span style="color:#f92672">role&lt;/span>: &lt;span style="color:#ae81ff">control-plane&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#f92672">image&lt;/span>: &lt;span style="color:#ae81ff">kindest/node:v1.15.7@sha256:e2df133f80ef633c53c0200114fce2ed5e1f6947477dbc83261a6a921169488d&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>- &lt;span style="color:#f92672">role&lt;/span>: &lt;span style="color:#ae81ff">worker&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#f92672">image&lt;/span>: &lt;span style="color:#ae81ff">kindest/node:v1.15.7@sha256:e2df133f80ef633c53c0200114fce2ed5e1f6947477dbc83261a6a921169488d&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>- &lt;span style="color:#f92672">role&lt;/span>: &lt;span style="color:#ae81ff">worker&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#f92672">image&lt;/span>: &lt;span style="color:#ae81ff">kindest/node:v1.15.7@sha256:e2df133f80ef633c53c0200114fce2ed5e1f6947477dbc83261a6a921169488d&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;blockquote>
&lt;p>I know that this is a deprecated version of Kubernetes. But since the workloads I am testing will be deployed on GKE, I need to match that environment as closely as possible.&lt;/p>&lt;/blockquote>
&lt;p>Digging deeper using &lt;code>dig&lt;/code> (Yes! That was intentional), I found that pods inside the cluster could neither resolve each other through Kubernetes service discovery nor resolve names in the outside world.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>debug ~ $ dig postgres-airflow.default.svc.cluster.local
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>; &amp;lt;&amp;lt;&amp;gt;&amp;gt; DiG 9.11.5-P4-5.1+deb10u1-Debian &amp;lt;&amp;lt;&amp;gt;&amp;gt; postgres-airflow.default.svc.cluster.local
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>;; global options: +cmd
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>;; connection timed out; no servers could be reached
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>debug ~ $ dig naba.run
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>; &amp;lt;&amp;lt;&amp;gt;&amp;gt; DiG 9.11.5-P4-5.1+deb10u1-Debian &amp;lt;&amp;lt;&amp;gt;&amp;gt; naba.run
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>;; global options: +cmd
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>;; connection timed out; no servers could be reached
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Pinging the pod/service IPs worked fine, which meant issues with kube-proxy could be ruled out. The next thing that came to my mind was DNS. It&amp;rsquo;s always DNS, right? &amp;#x1f609;&lt;/p>
&lt;p>&lt;img src="https://nabarun.dev/images/its-always-dns.png" alt="It&amp;rsquo;s always DNS">&lt;/p>
&lt;blockquote>
&lt;p>Image Source: &lt;a href="https://www.reddit.com/r/sysadmin/comments/34ag51/its_always_dns/">https://www.reddit.com/r/sysadmin/comments/34ag51/its_always_dns/&lt;/a>&lt;/p>&lt;/blockquote>
&lt;p>Looking at the CoreDNS pods, it was pretty evident that they were erroring out and something was wrong.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ kubectl -n kube-system get pods -l k8s-app&lt;span style="color:#f92672">=&lt;/span>kube-dns
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>NAME READY STATUS RESTARTS AGE
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>coredns-5d4dd4b4db-kmmsf 0/1 CrashLoopBackOff &lt;span style="color:#ae81ff">262&lt;/span> 22h
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>coredns-5d4dd4b4db-lnqjb 0/1 CrashLoopBackOff &lt;span style="color:#ae81ff">262&lt;/span> 22h
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Next, I fetched logs of one of the pods.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ kubectl -n kube-system logs coredns-5d4dd4b4db-lnqjb
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>.:53
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>2020-06-30T07:24:33.638Z &lt;span style="color:#f92672">[&lt;/span>INFO&lt;span style="color:#f92672">]&lt;/span> CoreDNS-1.3.1
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>2020-06-30T07:24:33.639Z &lt;span style="color:#f92672">[&lt;/span>INFO&lt;span style="color:#f92672">]&lt;/span> linux/amd64, go1.11.4, 6b56a9c
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>CoreDNS-1.3.1
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>linux/amd64, go1.11.4, 6b56a9c
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>2020-06-30T07:24:33.639Z &lt;span style="color:#f92672">[&lt;/span>INFO&lt;span style="color:#f92672">]&lt;/span> plugin/reload: Running configuration MD5 &lt;span style="color:#f92672">=&lt;/span> 5d5369fbc12f985709b924e721217843
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>2020-06-30T07:24:33.641Z &lt;span style="color:#f92672">[&lt;/span>FATAL&lt;span style="color:#f92672">]&lt;/span> plugin/loop: Loop &lt;span style="color:#f92672">(&lt;/span>127.0.0.1:59596 -&amp;gt; :53&lt;span style="color:#f92672">)&lt;/span> detected &lt;span style="color:#66d9ef">for&lt;/span> zone &lt;span style="color:#e6db74">&amp;#34;.&amp;#34;&lt;/span>, see https://coredns.io/plugins/loop#troubleshooting. Query: &lt;span style="color:#e6db74">&amp;#34;HINFO 41198296958627012.1538475969163399818.&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Voila! The error itself pointed me to the troubleshooting documentation. This kind of error logging is pretty rare, to be honest, and I would love to see it in more projects.&lt;/p>
&lt;p>Coming back to the problem at hand, the &lt;a href="https://coredns.io/plugins/loop">loop&lt;/a> plugin detects DNS forwarding loops and raises an error. To see why a loop was forming, we have to look at the CoreDNS configuration.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">&lt;span style="display:flex;">&lt;span>.:53 {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> errors
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> health
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> kubernetes cluster.local in-addr.arpa ip6.arpa {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> pods insecure
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> upstream
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> fallthrough in-addr.arpa ip6.arpa
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ttl 30
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> }
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> prometheus :9153
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> forward . /etc/resolv.conf
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> cache 30
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> loop
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> reload
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> loadbalance
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>}
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>So, any DNS request to this CoreDNS server is first processed by the &lt;a href="https://coredns.io/plugins/kubernetes">kubernetes&lt;/a> plugin, and if the domain name does not match the in-cluster domain patterns, the request is forwarded to the next plugin in the chain, which in this case is &lt;a href="https://coredns.io/plugins/forward">forward&lt;/a>. This is a simplified explanation of what is happening; a detailed one can be found in the &lt;a href="https://coredns.io/manual/toc/#plugins">CoreDNS manual&lt;/a>.&lt;/p>
&lt;p>&lt;code>forward . /etc/resolv.conf&lt;/code> configures the forward plugin to use the host&amp;rsquo;s resolver configuration for DNS resolution. Let&amp;rsquo;s have a look at what is in that file.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ kubectl -n kube-system cp coredns-5d4dd4b4db-lnqjb:/etc/resolv.conf resolv.conf
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec &lt;span style="color:#e6db74">&amp;#34;c3d8909904e9c6ced0a41e73133c1f6acf5517edd29ed866918dc3001eb6df02&amp;#34;&lt;/span>: OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused &lt;span style="color:#e6db74">&amp;#34;exec: \&amp;#34;tar\&amp;#34;: executable file not found in &lt;/span>$PATH&lt;span style="color:#e6db74">&amp;#34;&lt;/span>: unknown
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Bummer! The CoreDNS Docker image used here doesn&amp;rsquo;t have the tools we need to read the file. But let&amp;rsquo;s try to inspect the CoreDNS deployment. Kubernetes has a way to define the contents of &lt;code>/etc/resolv.conf&lt;/code>, and that is the &lt;code>dnsPolicy&lt;/code> field in a pod specification. (More on this in later articles.)&lt;/p>
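For reference, here is a minimal sketch of where `dnsPolicy` lives in a pod spec (the pod name and values are illustrative, not taken from the cluster above). The `None` policy together with a `dnsConfig` block is how you supply a fully custom resolver configuration instead of inheriting one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # illustrative name
spec:
  dnsPolicy: "None"            # don't inherit node or cluster DNS settings
  dnsConfig:                   # rendered into the pod's /etc/resolv.conf
    nameservers:
      - 8.8.8.8
    options:
      - name: ndots
        value: "2"
  containers:
    - name: app
      image: alpine:latest
      command: ["sleep", "infinity"]
```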
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ kubectl -n kube-system get deployments/coredns -o yaml | grep &lt;span style="color:#e6db74">&amp;#34;dnsPolicy&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> dnsPolicy: Default
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>When &lt;code>dnsPolicy&lt;/code> is set to &lt;code>Default&lt;/code>, containers inherit the DNS configuration of the Kubernetes node they are scheduled on. kind runs all the nodes as Docker containers. Let&amp;rsquo;s see what&amp;rsquo;s in there:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>$ docker ps
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>a561ebab4938 kindest/node:v1.15.7 &lt;span style="color:#e6db74">&amp;#34;/usr/local/bin/entr…&amp;#34;&lt;/span> &lt;span style="color:#ae81ff">30&lt;/span> hours ago Up &lt;span style="color:#ae81ff">30&lt;/span> hours airflow-worker2
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>7150659a691a kindest/node:v1.15.7 &lt;span style="color:#e6db74">&amp;#34;/usr/local/bin/entr…&amp;#34;&lt;/span> &lt;span style="color:#ae81ff">30&lt;/span> hours ago Up &lt;span style="color:#ae81ff">30&lt;/span> hours 127.0.0.1:35725-&amp;gt;6443/tcp airflow-control-plane
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>f2f2e47b4a41 kindest/node:v1.15.7 &lt;span style="color:#e6db74">&amp;#34;/usr/local/bin/entr…&amp;#34;&lt;/span> &lt;span style="color:#ae81ff">30&lt;/span> hours ago Up &lt;span style="color:#ae81ff">30&lt;/span> hours airflow-worker
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>$ docker exec airflow-control-plane cat /etc/resolv.conf
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>nameserver 127.0.0.11
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>options ndots:0
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The nameserver specified is a loopback address. From inside the CoreDNS pod, that loopback points at the pod itself, so CoreDNS ends up forwarding queries to &lt;em>itself&lt;/em>. This creates a circular dependency in DNS resolution, and that is exactly what the CoreDNS loop plugin caught.&lt;/p>
&lt;p>I resorted to a quick fix: changing the upstream resolver used by CoreDNS to 8.8.8.8 by modifying the Corefile as follows and restarting CoreDNS.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">&lt;span style="display:flex;">&lt;span> .:53 {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> errors
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> health
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> kubernetes cluster.local in-addr.arpa ip6.arpa {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> pods insecure
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> upstream
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> fallthrough in-addr.arpa ip6.arpa
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> ttl 30
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> }
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> prometheus :9153
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>- forward . /etc/resolv.conf
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>+ forward . 8.8.8.8
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> cache 30
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> loop
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> reload
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> loadbalance
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> }
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>CoreDNS pods are now running without errors and DNS resolution is working as expected.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>debug ~ &lt;span style="color:#75715e"># dig +short postgres-airflow.default.svc.cluster.local&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>10.98.135.99
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Now, the fix is not a perfect one. Any new containers created with &lt;code>dnsPolicy: Default&lt;/code> will still face the same issue. The ideal way is to configure the resolver in the node&amp;rsquo;s OS distribution to not use a localhost loopback for DNS resolution, or to maintain a custom resolver configuration and pass its path to the kubelet.&lt;/p>
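The latter approach can be sketched as a kubelet configuration; `resolvConf` is a real KubeletConfiguration field, though the file path below is just an example:

```yaml
# Excerpt from a KubeletConfiguration file (passed to the kubelet via --config).
# resolvConf points the kubelet at a resolver file that avoids the loopback
# address; pods with dnsPolicy: Default then inherit this file instead.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /etc/kubernetes/upstream-resolv.conf
```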
&lt;h2 id="final-thoughts">Final Thoughts&lt;/h2>
&lt;p>It was fun diving into the problem and understanding the basics of why DNS resolution was failing.&lt;/p>
&lt;p>The whole debugging process is meant to describe a thought process for debugging an issue like this in a complex system. Straightforward answers can be found in the debugging sections &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues">here&lt;/a> and &lt;a href="https://coredns.io/plugins/loop">here&lt;/a>.&lt;/p></description></item><item><title>Hello World</title><link>https://nabarun.dev/posts/hello-world/</link><pubDate>Sat, 20 Jun 2020 00:00:00 +0000</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/posts/hello-world/</guid><description>&lt;p>It has been some time (a long time, in reality &amp;#x1f629;) since I first wanted to start writing blog posts about things that I learn and implement.&lt;/p>
&lt;p>Taking the extra time during the lockdown as an opportunity, I am planning to write a post at least every week.&lt;/p></description></item><item><title>Gaming Setup</title><link>https://nabarun.dev/setup/gaming/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/setup/gaming/</guid><description>&lt;p>&lt;img src="https://nabarun.dev/images/setup.jpeg" alt="setup">&lt;/p>
&lt;p>This is the hardware configuration of my gaming/home workstation setup. (Note: some of these are affiliate links.)&lt;/p>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Category&lt;/th>
&lt;th>Details&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>CPU&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3yEsPfY">Intel Core i7 12700K 3.6 Ghz 8P+4E&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Motherboard&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3SU9v5d">ASUS TUF Gaming Z690-PLUS WIFI D4&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>RAM&lt;/td>
&lt;td>2 x &lt;a href="https://amzn.to/3AHrAgo">Corsair Vengeance 16GB DDR4-3200 RGB&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>GPU&lt;/td>
&lt;td>&lt;a href="https://www.nvidia.com/en-in/geforce/graphics-cards/30-series/rtx-3080-3080ti/">Nvidia RTX 3080 Founders Edition&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>OS Drive&lt;/td>
&lt;td>&lt;a href="https://amzn.to/4fVqGxa">Samsung 980 PRO 1TB&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Home Drive&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3WY6HFc">Samsung 970 EVO Plus 2TB&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>HDD&lt;/td>
&lt;td>2 x &lt;a href="https://amzn.to/4dReQSR">Seagate Barracuda 4TB&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Case&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3MiaSHb">NZXT H7 Flow&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Power Supply&lt;/td>
&lt;td>&lt;a href="https://amzn.to/4dPD6om">Corsair RM 850 80 Plus Gold&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>AIO Cooler&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3WV2vpN">Corsair H150 RGB&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Speakers&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3WXQvnk">Samsung HW-C45E/XL 2.1&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Display&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3MjX0fi">LG 27UL550 27&amp;quot; 4K&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Display&lt;/td>
&lt;td>Dell U2720Q 27&amp;quot; 4K (This one is discontinued. The newer variant is &lt;a href="https://amzn.to/3YQwbH9">U2723QE&lt;/a>)&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Keyboard&lt;/td>
&lt;td>Monsgeek M1 / ePBT Kavala / Boba U4T. Read more on the &lt;a href="https://nabarun.dev/setup/keyboards">keyboards page&lt;/a>.&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Dock&lt;/td>
&lt;td>&lt;a href="https://www.tpstech.in/products/dell-universal-usb-c-docking-station-supports-upto-three-4k-displays-d6000">Dell D6000&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Headphones&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3yVKAqY">Sony WH-1000XM5&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Mouse&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3SZtKhU">Logitech MX Master 3&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Webcam&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3WWTIDF">Logitech C930e&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Monitor Stand&lt;/td>
&lt;td>&lt;a href="https://amzn.to/3WUb2t9">Sunon Dual LED Monitor Stand - Spring Mount&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>UPS&lt;/td>
&lt;td>&lt;a href="https://amzn.to/4dxoHgX">APC BX1100C-IN 1100VA&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Operating System&lt;/td>
&lt;td>&lt;a href="https://www.microsoft.com/en-in/d/windows-11-pro/dg7gmgf0d8h4">Windows 11 Pro&lt;/a> / &lt;a href="https://pop.system76.com/">PopOS!&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h2 id="console">Console&lt;/h2>
&lt;p>I own a Sony PS5 Disc Edition to play Sony exclusives. I am a big fan of the Horizon series, Ghost of Tsushima, and the Uncharted series.&lt;/p>
&lt;p>I have a PS4 Slim 1TB Edition that I don&amp;rsquo;t play on anymore. If anyone wants to buy it, &lt;a href="https://nabarun.dev/contact">reach out to me&lt;/a>.&lt;/p>
&lt;h2 id="handheld">Handheld&lt;/h2>
&lt;p>I also have a Nintendo Switch to play FIFA and The Legend of Zelda on the go.&lt;/p></description></item><item><title>Homelab</title><link>https://nabarun.dev/setup/homelab/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/setup/homelab/</guid><description>&lt;p>Currently I am prototyping all Homelab services on a Raspberry Pi 3 and playing around with NixOS for configuration management. The plan is to run a few services behind a Tailscale network, exposing most of them only inside the Tailscale network and a few public-facing ones on the web.&lt;/p>
&lt;h2 id="todo">TODO&lt;/h2>
&lt;ul>
&lt;li>Learn NixOS.&lt;/li>
&lt;li>Buy cheap commodity hardware to run compute nodes for fun&lt;/li>
&lt;li>Build a mini-ITX or similar form factor PC with decent enough compute and high amounts of storage&lt;/li>
&lt;li>Run services to enhance life&lt;/li>
&lt;li>Run backup services&lt;/li>
&lt;/ul></description></item><item><title>Keyboards</title><link>https://nabarun.dev/setup/keyboards/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/setup/keyboards/</guid><description>&lt;p>I have quite a few mechanical keyboards. Only two of them are pre-built, but even those have hot-swap switch sockets.&lt;/p>
&lt;p>Currently I use a Monsgeek M1 and an Ikki68 Aurora R2 as my primary keyboards. The former is at home, the latter at my office.&lt;/p>
&lt;h2 id="todo">TODO&lt;/h2>
&lt;ul>
&lt;li>Add a component list&lt;/li>
&lt;li>Add more details of currently built configuration&lt;/li>
&lt;li>Add photos of builds&lt;/li>
&lt;/ul></description></item><item><title>Trips</title><link>https://nabarun.dev/trips/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><author>hey@nabarun.dev</author><guid>https://nabarun.dev/trips/</guid><description>&lt;p>Some of my notable trips over the years.&lt;/p>
&lt;blockquote>
&lt;p>Note: This page is still WIP. I plan to expand details of each of these trips as blog posts.&lt;/p>&lt;/blockquote>
&lt;h2 id="2024">2024&lt;/h2>
&lt;ul>
&lt;li>Munnar, Kerala, India&lt;/li>
&lt;li>Malta&lt;/li>
&lt;li>Vatican City&lt;/li>
&lt;li>San Marino&lt;/li>
&lt;li>Italy&lt;/li>
&lt;li>Monaco&lt;/li>
&lt;li>France&lt;/li>
&lt;/ul>
&lt;h2 id="2023">2023&lt;/h2>
&lt;ul>
&lt;li>Kazakhstan&lt;/li>
&lt;li>Uzbekistan&lt;/li>
&lt;li>USA&lt;/li>
&lt;li>Goa, India&lt;/li>
&lt;li>Ooty, TN, India&lt;/li>
&lt;li>Turkey &amp;#x2728;&lt;/li>
&lt;li>Azerbaijan &amp;#x1f697;&lt;/li>
&lt;li>Germany&lt;/li>
&lt;li>Austria&lt;/li>
&lt;li>Liechtenstein&lt;/li>
&lt;li>Switzerland&lt;/li>
&lt;li>Luxembourg&lt;/li>
&lt;li>Varkala, Kerala, India&lt;/li>
&lt;li>Malaysia&lt;/li>
&lt;li>Thailand&lt;/li>
&lt;/ul>
&lt;h2 id="2022">2022&lt;/h2>
&lt;ul>
&lt;li>Belgium&lt;/li>
&lt;li>Netherlands&lt;/li>
&lt;li>Spain&lt;/li>
&lt;li>France&lt;/li>
&lt;/ul>
&lt;h2 id="2021">2021&lt;/h2>
&lt;p>&amp;#x1f9a0;&lt;/p>
&lt;h2 id="2020">2020&lt;/h2>
&lt;p>&amp;#x1f9a0;&lt;/p>
&lt;h2 id="2019">2019&lt;/h2>
&lt;ul>
&lt;li>USA&lt;/li>
&lt;li>Spain&lt;/li>
&lt;/ul>
&lt;h2 id="2018">2018&lt;/h2>
&lt;ul>
&lt;li>Singapore&lt;/li>
&lt;li>Rishikesh, Uttarakhand, India&lt;/li>
&lt;li>Shimla, Himachal Pradesh, India&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>Note: I will add trips from 2017 and before when I get to detailing them.&lt;/p>&lt;/blockquote></description></item></channel></rss>