<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Balaram Neupane]]></title><description><![CDATA[This is a place where I try to articulate anything I find interesting or worth sharing with everyone.]]></description><link>https://blogs.balaramneupane.com.np</link><generator>RSS for Node</generator><lastBuildDate>Sun, 26 Apr 2026 06:21:09 GMT</lastBuildDate><atom:link href="https://blogs.balaramneupane.com.np/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Production Memory Leak I Solved by Accident.]]></title><description><![CDATA[I’m writing this a few months after the whole thing went down. At the time, I jotted notes here and there while debugging, but I never properly sat down to document it. So yeah, some details are a bit]]></description><link>https://blogs.balaramneupane.com.np/the-production-memory-leak-i-solved-by-accident</link><guid isPermaLink="true">https://blogs.balaramneupane.com.np/the-production-memory-leak-i-solved-by-accident</guid><dc:creator><![CDATA[Balaram Neupane]]></dc:creator><pubDate>Mon, 30 Mar 2026 15:03:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69691760d0b08eece4551b4d/a4f8cb3a-7d57-4f5a-8c06-1efb495d68c4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I’m writing this a few months after the whole thing went down. At the time, I jotted notes here and there while debugging, but I never properly sat down to document it. So yeah, some details are a bit hazy now. But, I have kept most of the important parts intact.</p>
<h2>The Setup</h2>
<p>We had a pretty simple architecture.</p>
<p>A web app built with Java and another AI app built using FastAPI and LangGraph.</p>
<p>The main app (Java) would send a request to the AI service. That service would process it and send the result back via a webhook. Two completely separate services. Two separate Postgres databases. Every request and response was stored on both sides.</p>
<p>That separation ended up saving us a lot of guesswork later. When things broke, it was obvious which side to look at.</p>
<h2>The First Crash</h2>
<p>It happened during peak US hours. Everything just stopped, and the instance’s IP was not responding.</p>
<p>An alert fired. Two health checks were fine, but the EC2 reachability check failed. The instance was inaccessible.</p>
<p>I checked the logs for incoming requests around that time, but none of them had made it through. The database had no record of them. It was like they vanished mid-flight.</p>
<p>After a couple of hours, we restarted the instance. Things came back to life like nothing had happened.</p>
<p>Not great. But also not catastrophic. This service wasn’t user-facing in real time, so delays were acceptable. We could reprocess missed requests because each one was tied to a specific asset. So we did that and moved on.</p>
<p>But “restart and pray” isn’t a real solution. Something was clearly wrong.</p>
<h2>Chasing the Obvious (and Wrong) Idea</h2>
<p>My first thought was: memory leak. It crashed, we restarted, and everything worked again. That’s usually a dead giveaway. So I tried to reproduce it locally. Same instance type, same specs, and I spammed it with concurrent requests.</p>
<p>Nothing.</p>
<p>No crash. No slowdown. It just kept running like it had something to prove. Still, I didn’t trust it. Maybe it was too many parallel requests eating up a lot of RAM. So I added a Redis queue and limited processing to two concurrent threads.</p>
<p>Good change overall. Felt responsible. But it was still not going to take us to the root of the problem.</p>
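<p>For illustration, the shape of that throttle looks something like this (a minimal sketch using an in-process semaphore; the real version sat behind a Redis queue, and all names here are made up):</p>
<pre><code class="lang-python">import threading

MAX_CONCURRENT = 2                      # mirror of the two-worker limit
slots = threading.Semaphore(MAX_CONCURRENT)
lock = threading.Lock()
active = 0
peak = 0

def process(request_id):
    """Handle one queued request while holding a concurrency slot."""
    global active, peak
    with slots:                         # blocks until a slot frees up
        with lock:
            active += 1
            peak = max(peak, active)    # record how many ran at once
        # ...actual request processing would happen here...
        with lock:
            active -= 1

threads = [threading.Thread(target=process, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # stays at or below MAX_CONCURRENT
</code></pre>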
<h2>Actually Looking at Memory</h2>
<p>Then I realized something embarrassingly basic. We weren’t even tracking RAM usage. By default, EC2 doesn’t show memory metrics. You have to install an agent for that. Which… we hadn’t done. So I set that up. And there it was. Memory usage was climbing steadily and never dropping. Eventually, the system would choke and die.</p>
<p>The weird part was, it didn’t happen often. Maybe once every couple of weeks; it only occurred three times in total. Not frequent enough to easily debug, but frequent enough to be a real problem. At this point, it felt like trying to catch a bug that only shows up when it feels like it.</p>
<h2>The Moment Things Clicked</h2>
<p>The breakthrough didn’t come from staring harder at logs. It came from a random question that popped into my head while reading something unrelated: “How does this system keep track of multiple threads without mixing things up?” That had me digging.</p>
<p>Turns out, LangGraph uses a checkpointer to store thread state. And if you don’t configure anything external, it just keeps everything in memory. Every request. Every thread. Every checkpoint.</p>
<p>All sitting in RAM.</p>
<p>And this is the important part: none of it was being cleaned up. (This is a design decision that allows resuming a conversation at any point in time, although, debatably, a better choice would have been to default to a file-based checkpointer rather than an in-memory one.) So with every request, memory usage ticked up a little. And then a little more. And then a little more.</p>
<p>Until eventually, the instance just ran out of room and gave up.</p>
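<p>To make the failure mode concrete, here is a toy model of an in-memory checkpointer (hypothetical names, not LangGraph’s actual API): every request creates a thread whose state lands in a dict that nothing ever prunes.</p>
<pre><code class="lang-python">class InMemoryCheckpointer:
    """Toy stand-in for an in-memory checkpoint store."""

    def __init__(self):
        self._store = {}

    def put(self, thread_id, checkpoint):
        # State accumulates per thread; nothing is ever evicted.
        self._store.setdefault(thread_id, []).append(checkpoint)

    def total_checkpoints(self):
        return sum(len(c) for c in self._store.values())

saver = InMemoryCheckpointer()
for request_id in range(10_000):
    # Each incoming request becomes a new thread with its own state.
    saver.put(f"thread-{request_id}", {"messages": ["..."] * 50})

print(saver.total_checkpoints())  # 10000: grows with traffic, never shrinks
</code></pre>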
<h2>Why I Never Saw It in Dev</h2>
<p>This also explained why I couldn’t reproduce it. In development, the service restarted all the time, every deploy, every change. Each restart wiped the memory clean. So the leak never had time to build up.</p>
<p>In production, though, the service stayed up for days or weeks. That’s where the problem had space to grow.</p>
<h2>The Fix</h2>
<p>Once the problem made sense, the fix was straightforward. We stopped storing state in memory and moved it to Postgres. Now, instead of piling everything into RAM, it gets persisted properly. Memory usage stays flat, no matter how many requests come in.</p>
<p>We also switched to a more appropriate instance type. Since then, no crashes. No slow memory creep. Just a stable system doing what it’s supposed to.</p>
<h2>What I Learned (the Hard Way)</h2>
<p>A few things I wish I had done differently:</p>
<p><strong>Write things down while debugging.</strong> You will forget details. Numbers, timestamps, weird observations: all of it. Don’t trust your memory. It’s unreliable when you need it most.</p>
<p><strong>Don’t assume your monitoring is enough.</strong> We didn’t even have RAM metrics. That alone delayed the diagnosis way more than it should have.</p>
<p><strong>Know your tools beyond the happy path.</strong> Defaults are often designed for convenience, not scale. Something that works perfectly in dev can quietly destroy you in production.</p>
<p><strong>Dev and prod behave differently in subtle ways.</strong> Frequent restarts in dev were hiding the issue entirely. That gap matters more than it seems.</p>
<p><strong>Sometimes the answer comes from sideways thinking.</strong> The breakthrough didn’t come from grinding harder. It came from curiosity about something loosely related. That’s often how these things work.</p>
<hr />
<p>Looking back, none of it was very complex. But chasing something invisible is what made this find stick.</p>
]]></content:encoded></item><item><title><![CDATA[Election: Are you making an impulsive decision? ]]></title><description><![CDATA[For many people, choosing a party or candidate is largely an emotional decision. The definition of right candidate varies.

For some, the right candidate is someone who once offered them a helping han]]></description><link>https://blogs.balaramneupane.com.np/election-are-you-making-an-impulsive-decision</link><guid isPermaLink="true">https://blogs.balaramneupane.com.np/election-are-you-making-an-impulsive-decision</guid><dc:creator><![CDATA[Balaram Neupane]]></dc:creator><pubDate>Mon, 02 Mar 2026 17:18:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69691760d0b08eece4551b4d/47c05d1b-8340-4035-8641-20b0aa34c8fc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For many people, choosing a party or candidate is largely an emotional decision. The definition of right candidate varies.</p>
<ul>
<li><p>For some, the right candidate is someone who once offered them a helping hand.</p>
</li>
<li><p>For others, it’s the representative they see regularly or feel personally connected to.</p>
</li>
<li><p>For some, it’s the party that happened to be in power when a decision favoring them was made.</p>
</li>
<li><p>For others, it’s about opposing someone they dislike: supporting a rival out of resentment rather than conviction (quite visible among UML and NCP supporters).</p>
</li>
<li><p>And for some, it’s about personality; the way certain leaders present themselves.</p>
</li>
</ul>
<p>These are thought processes I’ve observed among people in my community when asked about their voting decisions. Many of these choices feel understandable, but also incomplete.</p>
<p>I strongly believe that voting decisions shouldn’t be driven solely by emotion. Political rationale should play a central role: evaluating a candidate’s policies, track record, competence, integrity, and long-term vision. Ask what realistic change a candidate can bring, how their policies, if enacted, will affect you and your community, and whether they have a history of delivering with accountability.<br />Having said that, have you truly thought it through to reach your conclusion? Have you convinced yourself? There’s no right decision here. But I hope you’re not simply rationalizing an emotional impulse.</p>
]]></content:encoded></item><item><title><![CDATA[Why your multi-threaded python code is not faster?]]></title><description><![CDATA[This article was not written by AI. It is human written, and I would appreciate any feedbacks on it.
We must have heard how python isn’t truly parallel. Why does it happen? What actually is the Global Interpreter Lock and how it is a culprit behind t...]]></description><link>https://blogs.balaramneupane.com.np/why-your-multi-threaded-python-code-is-not-faster</link><guid isPermaLink="true">https://blogs.balaramneupane.com.np/why-your-multi-threaded-python-code-is-not-faster</guid><category><![CDATA[Python]]></category><category><![CDATA[memory-management]]></category><category><![CDATA[Global interpreter lock]]></category><category><![CDATA[Advanced Python Concepts]]></category><dc:creator><![CDATA[Balaram Neupane]]></dc:creator><pubDate>Sat, 17 Jan 2026 13:38:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768657401899/417ee8f0-9bc4-445d-b679-ae874662d30b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was not written by AI. It is human written, and I would appreciate any feedbacks on it.</em></p>
<p>We’ve all heard that Python isn’t truly parallel. Why is that? What actually is the Global Interpreter Lock, and how is it the culprit behind Python’s lack of true parallelism? I try to explain it, along with a few internal workings of the language, at a very high, deliberately oversimplified level.</p>
<p>In short, the GIL is a lock that allows only one thread at a time to execute Python bytecode within the <code>CPython</code> interpreter.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>To understand GIL, we need to understand the following concepts.</p>
<ol>
<li><p><strong>IO Bound v/s CPU Bound</strong></p>
<p> The core reason for building parallel systems is a performance bottleneck, arising either from IO wait or from the processor’s limit.</p>
<ul>
<li><p>An IO-bound task spends most of its time waiting for something outside the CPU to respond. For example: querying a database (the wait depends on the database), or downloading a file from the internet.</p>
</li>
<li><p>A task is CPU-bound if its speed depends entirely on how fast the processor can crunch numbers; a faster processor completes the task faster, unlike the IO-bound case. Examples: image processing, mathematical calculations.</p>
</li>
</ul>
</li>
</ol>
<p>In an IO-bound task, processor speed isn’t the bottleneck; in a CPU-bound task, it is.</p>
<ol start="2">
<li><p><strong>Threads v/s Processes</strong></p>
<ul>
<li><p><strong>Threads</strong></p>
<p>  A thread is a lightweight unit of execution that lives inside a process. Every process starts with at least one thread. Multiple threads in a process share the same memory, the same interpreter, and the same GIL.</p>
</li>
<li><p><strong>Process</strong></p>
<p>  A process is an independent running instance of a program. It has its own Python interpreter, its own memory space, and its own GIL. If we spawn a separate process for each task, we achieve true parallelism, because each task has a GIL of its own. But this isn’t always an efficient approach, and we’ll see why later.</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-how-python-manages-memory">How Does Python Manage Memory?</h2>
<p>Python uses a technique called reference counting. Each object created in Python has a reference count, which tracks the number of references pointing to that object. When the reference count reaches zero, the memory the object occupies is freed.</p>
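<p>You can watch the count move with <code>sys.getrefcount</code> (note that the function’s own argument temporarily adds one reference to whatever you pass it):</p>
<pre><code class="lang-python">import sys

data = []
print(sys.getrefcount(data))  # 2: the `data` name plus the call argument

alias = data                  # a second name pointing at the same list
print(sys.getrefcount(data))  # 3

del alias                     # dropping a reference decrements the count
print(sys.getrefcount(data))  # 2; at zero, the object's memory is freed
</code></pre>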
<p>Python also uses garbage collection to manage memory (to break reference cycles), but the problem of the GIL arises from reference counting. Why reference counting was preferred in the first place is a separate topic, which we won’t go into in this article.</p>
<h2 id="heading-problem-that-gil-solves">Problem that GIL solves</h2>
<p>The reference count variable is susceptible to race conditions if two threads are allowed to read and write it simultaneously. This can result in leaked memory, that is, memory that is never released even after the program ends. Or it can incorrectly release memory while a reference to the object still exists.</p>
<p>So the GIL is a single lock, built on OS-level locking primitives, that a thread must acquire before executing Python bytecode. This way, only one thread at a time can modify reference counts. For this reason, even if we spawn multiple threads, only one of them can make progress on CPU-bound work at any moment. During IO, however, the GIL is released, so in that case threads really can overlap.</p>
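<p>A small experiment makes this visible. The countdown below is pure CPU work with no IO, so the GIL is never released for long; on a standard (GIL-enabled) CPython build, running it on two threads is no faster than running it twice in a row, and is often a little slower due to lock contention:</p>
<pre><code class="lang-python">import threading
import time

def count_down(n):
    # Pure CPU-bound work: no IO, so the GIL stays contended.
    while n != 0:
        n -= 1

N = 5_000_000

start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

# On a GIL build, `threaded` is not meaningfully smaller than `sequential`.
print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
</code></pre>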
<p>How threads acquire the GIL is a story of its own. The old way (before Python 3.2) was based on ticks: every 100 ticks, a check ran to decide whether the running thread should give up the lock. The modern way is based on a timed wait: a thread runs until a second thread requests the lock; the requesting thread then waits a fixed interval and, if the lock still hasn’t been released, forces the first thread to drop it.</p>
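<p>That wait interval is exposed (and tunable) through <code>sys</code>:</p>
<pre><code class="lang-python">import sys

print(sys.getswitchinterval())   # 0.005 seconds by default
sys.setswitchinterval(0.001)     # request more frequent thread switches
print(sys.getswitchinterval())
</code></pre>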
<p>We can achieve true parallelism in Python if we spawn multiple processes instead of threads. But creating a process is an expensive operation, and even with parallelism it may not provide better speed, due to the overhead of launching a separate interpreter and memory space per process. In some cases, a single-threaded program can be faster than one using multiple processes.</p>
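<p>For completeness, this is what the process-based route looks like with the standard library; each pool worker is a separate interpreter with its own GIL, at the cost of process startup and inter-process communication:</p>
<pre><code class="lang-python">from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    # Two worker processes: true parallelism, but each one carries
    # the overhead of its own interpreter and memory space.
    with Pool(processes=2) as pool:
        results = pool.map(square, range(6))
    print(results)  # [0, 1, 4, 9, 16, 25]
</code></pre>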
<h2 id="heading-what-could-have-been-an-alternative-solution-to-gil">What could have been an alternative solution to GIL?</h2>
<ul>
<li><p>One idea would be to provide a lock for each data structure shared across threads, so that they can’t be modified inconsistently. But this solution can result in deadlock, a condition where each thread waits, in a loop, for a lock held by another. It can also reduce efficiency through the repeated acquisition and release of many locks.</p>
</li>
<li><p>Another viable approach is a priority scheme: penalizing threads that use their full time slice and giving a bonus to those that give up the CPU voluntarily (IO-bound tasks).</p>
</li>
</ul>
<h2 id="heading-why-was-gil-chosen">Why was GIL chosen?</h2>
<p>The major reason was to provide thread safety when integrating C extensions. C extensions weren’t thread-safe by default, but the GIL gave Python a way to make them reliable and thread-safe. These C extensions became one of the primary reasons for Python’s adoption by the dev community.</p>
<p>To learn more about the technicalities of the GIL, the reader can refer to this talk by David Beazley from PyCon 2010: <a target="_blank" href="https://www.youtube.com/watch?v=Obt-vMVdM8s">https://www.youtube.com/watch?v=Obt-vMVdM8s</a></p>
<h2 id="heading-current-status-of-gil">Current Status of GIL</h2>
<p>Many attempts have been made to remove the GIL, one of the most famous being the Gilectomy.</p>
<p>As of Python 3.13, an experimental free-threaded build is available. It relies on memory-management techniques like mimalloc and biased reference counting. In spite of this, the GIL remains the default.</p>
<h2 id="heading-unorganized-points">Unorganized points</h2>
<ul>
<li><p>In the reference Python interpreter, <code>CPython</code>, these macro definitions are part of what implements the GIL:</p>
<ul>
<li><p><code>Py_BEGIN_ALLOW_THREADS</code>: releases the GIL, letting other threads run while the C code blocks.</p>
</li>
<li><p><code>Py_END_ALLOW_THREADS</code>: re-acquires the GIL once the blocking call returns.</p>
</li>
</ul>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[CI/CD with CircleCI | Step by Step guide]]></title><description><![CDATA[As a developer, writing code isn’t enough, you need to develop skills to be able to put your work online for everyone to see. In the modern software engineering, deployment isn’t enough. The changes you make should reflect quickly as well. That’s whe...]]></description><link>https://blogs.balaramneupane.com.np/cicd-with-circleci-step-by-step-guide</link><guid isPermaLink="true">https://blogs.balaramneupane.com.np/cicd-with-circleci-step-by-step-guide</guid><category><![CDATA[deployment]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Docker]]></category><category><![CDATA[CircleCI]]></category><dc:creator><![CDATA[Balaram Neupane]]></dc:creator><pubDate>Thu, 15 Jan 2026 16:56:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768496309115/26f985fe-3d5c-4f4a-bbc0-8e18aee173ff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>As a developer, writing code isn’t enough; you need the skills to put your work online for everyone to see. In modern software engineering, even deployment isn’t enough: the changes you make should be reflected quickly as well. That’s where CI/CD comes in. Continuous Integration and Continuous Deployment let you build, test, and deploy your application, and any changes to it, without manual intervention.</p>
<p>In this article I’ve compiled a comprehensive guide on setting up a CI/CD pipeline for a Django application, from repository setup to deployment using Docker in EC2. Most of it can be applied to other frameworks as well.</p>
<hr />
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p>Basic knowledge of Git, Docker and Django.</p>
</li>
<li><p>A GitHub account.</p>
</li>
<li><p>A server (e.g., an EC2 instance or any machine accessible via SSH).</p>
</li>
<li><p>CircleCI account (integrated with GitHub).</p>
</li>
<li><p>Docker Hub and AWS credentials.</p>
</li>
</ul>
<h3 id="heading-background">Background</h3>
<p>What will we be doing in this tutorial?</p>
<p>The goal is to set up our project to run on Docker, test it locally, and push the image to a registry (Docker Hub in this case). We can then use this image anywhere to run our project with the same settings.</p>
<p>So we will log into a server (EC2) using SSH, pull the Docker image, and run it on the server itself. This entire process is written in the config file; make sure to go through it carefully. Everything else is just setup to make sure the workflows defined in the <code>config.yml</code> file work properly.</p>
<p>This tutorial is just going to help you get your hands dirty. You’ll need to understand more to deploy your own application, so I’ll point out the topics that require further exploration.</p>
<h3 id="heading-step-1-set-up-a-github-repository">Step 1: Set Up a Github Repository</h3>
<ol>
<li><p><strong>Create a Repository</strong>: Login to GitHub and create a new repository named <code>django-circleci-demo</code></p>
</li>
<li><p><strong>Clone the Repository</strong>:</p>
</li>
</ol>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/&lt;your-username&gt;/django-circleci-demo.git 
<span class="hljs-built_in">cd</span> django-circleci-demo
</code></pre>
<ol start="3">
<li><strong>Initialize a Django Project</strong>:</li>
</ol>
<p><code>django-admin startproject myapp</code></p>
<p><code>python manage.py runserver</code></p>
<p><strong>4. Commit and Push Code:</strong></p>
<p><code>git add .</code></p>
<p><code>git commit -m "Initial commit"</code></p>
<p><code>git push origin main</code></p>
<h3 id="heading-step-2-configure-the-server-with-ssh">Step 2: Configure the server with SSH</h3>
<p>(When initially setting up a server, you’ll either download a <code>.pem</code> file and use it to access the server every time, or keep it in your <code>.ssh</code> config and create a username to log in with. With the approach below, you generate an SSH key on your local machine, log into the server through an in-browser shell or some other way, copy the public key over, and then use that key to access the server from the terminal on your local machine.)</p>
<ol>
<li><p><strong>Set Up SSH Access</strong>:</p>
</li>
<li><p><strong>Generate an SSH key pair</strong></p>
</li>
</ol>
<ul>
<li><code>ssh-keygen -t rsa -b 4096 -C "your-email@example.com"</code></li>
</ul>
<p><strong>3. Copy the public key to your server’s</strong> <code>~/.ssh/authorized_keys</code>.</p>
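<p>If you already have password access to the server, <code>ssh-copy-id</code> does step 3 in one command (assuming the default key path; the user and IP below are placeholders):</p>
<pre><code class="lang-bash"># Appends the local public key to the remote ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub user@your-server-ip
</code></pre>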
<p><strong>4. Test SSH Access</strong>: <code>ssh user@your-server-ip</code></p>
<p><strong>5. Set Up the Server:</strong></p>
<ul>
<li>Install Docker:</li>
</ul>
<pre><code class="lang-plaintext">sudo apt update 
sudo apt install docker.io 
sudo systemctl start docker 
sudo systemctl enable docker
</code></pre>
<ul>
<li>Install AWS CLI:</li>
</ul>
<pre><code class="lang-plaintext">curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" 
unzip awscliv2.zip 
sudo ./aws/install
</code></pre>
<h3 id="heading-step-3-circleci-configuration">Step 3: CircleCI Configuration</h3>
<ol>
<li><strong>Add CircleCI to the</strong> <strong>Repository</strong>: Go to CircleCI.</li>
</ol>
<ul>
<li>Connect your GitHub repository to CircleCI.</li>
</ul>
<p><strong>2. Set Up Environment Variables</strong>:</p>
<ul>
<li><p>Navigate to Project Settings &gt; Environment Variables in CircleCI.</p>
</li>
<li><p>Add the following variables:</p>
</li>
<li><p><code>DOCKER_USERNAME</code>: Your Docker Hub username.</p>
</li>
<li><p><code>DOCKER_PASSWORD</code>: Your Docker Hub password.</p>
</li>
<li><p><code>AWS_ACCESS_KEY_ID</code>: Your AWS Access Key ID.</p>
</li>
<li><p><code>AWS_SECRET_ACCESS_KEY</code> : Your AWS Secret Access Key.</p>
</li>
</ul>
<p>(Make sure the IAM user you’re using has enough permissions to access the EC2 server; if your application has other components, the user should have those permissions as well. Using ECR instead of Docker Hub for pushing Docker images is a common approach too; if you plan to take that route, ECR permissions will also be required.)</p>
<ul>
<li><p><code>SERVER_HOST</code>: Server IP.</p>
</li>
<li><p><code>SERVER_USERNAME</code>: Server username.</p>
</li>
<li><p><code>SERVER_PASSWORD</code>: Server password.</p>
</li>
<li><p><code>BE_ENV</code>: Environment variables specific to the project. You can <code>base64</code>-encode these when storing them in the CircleCI config and decode them on the server, for enhanced security.</p>
</li>
</ul>
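<p>As a sketch of that <code>base64</code> round trip (GNU coreutils flags shown; <code>BE_ENV</code> and the file contents are placeholders):</p>
<pre><code class="lang-bash"># Locally: encode the env file into a single line you can paste
# into CircleCI as BE_ENV.
printf 'DJANGO_SECRET=example\n' | tee .env
BE_ENV=$(base64 -w0 .env)

# On the server: decode it back into a usable .env file.
echo "$BE_ENV" | base64 -d | tee decoded.env
</code></pre>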
<h3 id="heading-step-4-dockerize-the-django-app">Step 4: Dockerize the Django App</h3>
<ol>
<li><strong>Create a</strong> <code>Dockerfile</code>:</li>
</ol>
<pre><code class="lang-plaintext">FROM python:3.10-slim 
WORKDIR /app 
COPY requirements.txt /app/ 
RUN pip install -r requirements.txt 
COPY . /app/ 
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
</code></pre>
<p><strong>2. Add</strong> <code>requirements.txt</code>:</p>
<pre><code class="lang-plaintext">Django&gt;=3.2,&lt;4.0
</code></pre>
<p><strong>3. Build and Test Docker image Locally</strong>:</p>
<pre><code class="lang-plaintext">docker build -t django-circleci-demo . 
docker run -p 8000:8000 django-circleci-demo
</code></pre>
<h3 id="heading-step-5-push-docker-image-to-docker-hub">Step 5: Push Docker Image to Docker Hub</h3>
<ol>
<li><strong>Log In to Docker Hub:</strong></li>
</ol>
<pre><code class="lang-plaintext">docker login -u &lt;DOCKER_USERNAME&gt; -p &lt;DOCKER_PASSWORD&gt;
</code></pre>
<p><strong>2. Push Image</strong>:</p>
<pre><code class="lang-plaintext">docker tag django-circleci-demo &lt;DOCKER_USERNAME&gt;/django-circleci-demo 
docker push &lt;DOCKER_USERNAME&gt;/django-circleci-demo
</code></pre>
<h3 id="heading-step-6-circleci-pipeline">Step 6: CircleCI Pipeline</h3>
<ol>
<li>Create <code>.circleci/config.yml</code>:</li>
</ol>
<pre><code class="lang-plaintext">version: 2.1
jobs:
  build-and-deploy:
    docker:
      - image: cimg/python:3.10
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install Docker CLI and sshpass
          command: |
            sudo apt-get update
            sudo apt-get install -y docker.io sshpass
      - run:
          name: Build Docker Image
          command: docker build -t django-circleci-demo .
      - run:
          name: Push to Docker Hub
          command: |
            echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
            docker tag django-circleci-demo $DOCKER_USERNAME/django-circleci-demo
            docker push $DOCKER_USERNAME/django-circleci-demo
      - run:
          name: Deploy to EC2
          command: |
            sshpass -p $SERVER_PASSWORD ssh -o StrictHostKeyChecking=no $SERVER_USERNAME@$SERVER_HOST "
            docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD &amp;&amp;
            docker pull $DOCKER_USERNAME/django-circleci-demo &amp;&amp;
            docker run -d -p 8000:8000 $DOCKER_USERNAME/django-circleci-demo"
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build-and-deploy
</code></pre>
<hr />
<h3 id="heading-step-6-access-the-application">Step 7: Access the Application</h3>
<p>Navigate to <code>http://&lt;your_ec2_public_ip&gt;:8000</code>. (Make sure the instance’s security group allows inbound traffic on port 8000.)</p>
]]></content:encoded></item></channel></rss>