How Fly.io and Tailscale Saved Notado
A technical overview of Notado's migration to Fly.io
tl;dr: the impact of Digital Ocean's 2022 price hikes; the technical details of migrating Notado from Digital Ocean to Fly.io; using Tailscale to make private connections from a Fly micro VM to a Digital Ocean managed Postgres database; an overview of new Notado features; and the end of the free open beta, with subscriptions for Notado arriving in 2023 at $5.99/month.
Since the last blog post back in 2020, and until a few weeks ago, Notado had been quietly running on a Digital Ocean Kubernetes cluster, working hard saving, indexing and serving content highlights across the internet for over 3000 beta users.
Before getting into the details of how that cluster was replaced by a microVM on Fly.io, allow me to take a moment to re-introduce Notado, the service, and to introduce for the first time the cast behind Notado.
Notado is a content-first bookmarking service, where the paragraphs and sentences that make you want to bookmark something are treated as first-class citizens rather than pieces of additional metadata.
You may be in the target audience for Notado if
You are interested in topics for which a significant portion of the thought-provoking content is produced in discussion comments rather than online articles and published books
You like the idea of being able to save both highlights from an article and analytical comments on the same article together in one place
You miss the days of the internet when there was an RSS feed for everything (Notado has public and private RSS feeds for everything)
Now, onto the cast.
The Cast of Notado, 2020 — 2022
The muscle, an Actix-Web microservice that extracts content from the comment permalinks of different websites, either via their APIs or by scraping HTML where no API exists
The diplomat, an Actix-Web microservice that sends saved content to the external services for which integrations have been built: Readwise, Instapaper and Pinboard
The socialite, a Discord bot powered by Serenity which users can share comment permalinks and highlights with to save them to their accounts
The caretaker, a Go microservice that listens for inserts to a PostgreSQL table and sends the content over to Meilisearch to enable best-in-class, fault- and typo-tolerant multilingual search
Notado on Digital Ocean
For over 95% of my career working in software development, I have worked in Platform, Infrastructure, DevOps, whatever you want to call it. I live and breathe HashiCorp and Kubernetes. Well, not really. But kind of.
Naturally, when I was building Notado, my first thought was “oh, I’ll just throw it on a managed Kubernetes cluster somewhere”.
For getting up and running, it was not a bad idea. I was able to throw together a Terraform project and the required Kubernetes manifests without much thought and everything just worked. For almost two years, it just worked largely uninterrupted.
So… What was the problem? For the smallest Kubernetes cluster I could configure on Digital Ocean at the time (and a load balancer, of course) I was paying just shy of $54 a month. With the cost of a managed Postgres database, some storage and taxes on top, the bill for running Notado came up to just under $100 every month.
If you think that sounds like overkill, it’s because it was completely overkill.
One of the nice things about deploying Rust services is the low resource usage footprint, but unfortunately that doesn’t really mean anything if you make lazy infrastructure choices like I did.
A few months ago there must have been a price hike over at Digital Ocean because my monthly bill started coming up to over $100, and for me, this was the straw that broke the camel’s back.
That silly mental threshold of “at least the bill is under $100 a month” had been crossed, and there was no turning back.
That sounds pretty sudden, dramatic, and triumphant, right? Well, if I’m being honest, this price hike marked the beginning of a period of depression for me in relation to Notado. At a particularly low point I had even reached out to Readwise on Twitter to see if they were interested in acquiring any parts of the Notado codebase.
I had been watching Fly.io with great admiration over the past two years. Whenever they would put out a new blog post, I would briefly think “damn, I wish I could migrate Notado over to Fly” but ultimately, I did not have the energy or mental capacity to rearchitect Notado to fully take advantage of Fly’s deployment model.
After the Digital Ocean price hike and hitting a rock bottom of sorts with my relationship to Notado, I started to spend time poring through the Fly.io documentation and putting together a plan to migrate as much of Notado as I could off of Digital Ocean and on to Fly.
In the Kubernetes world, Deployments that you expect connectivity with tend to also have Services. I won't go into too much detail, but you can basically connect to something running inside a cluster at an address that looks something like <service>.<namespace>.svc.cluster.local. If you send something to this address, your requests will go to the matching Service, if it exists and if it accepts connections.
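For readers less familiar with Kubernetes, an address like that is backed by a Service object. A minimal sketch of one (the names and ports here are illustrative, not Notado's actual manifests):

```yaml
# A Service named "notado-scraper" in the "production" namespace becomes
# reachable inside the cluster at:
#   notado-scraper.production.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: notado-scraper
  namespace: production
spec:
  selector:
    app: notado-scraper   # routes to Pods carrying this label
  ports:
    - port: 80            # port the cluster address listens on
      targetPort: 8081    # port the Pod's container listens on
```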
Let’s go back to our cast for a moment; they are mostly services that take requests and give responses among each other, so they all had DNS records like this inside of the Kubernetes cluster, and this is how they communicated with each other.
For example, when a user sent a comment permalink to Notado to be saved, something like the following happened:
A user sends a request to save a comment to notado.app
notado.app sends a request to notado-scraper.production.svc.cluster.local to get the content of the comment from the permalink
notado.app sends another request to notado-integrations.production.svc.cluster.local to push this newly saved content to any external integrations that the user has configured
Hopefully this is still making sense.
My first task was to get these services communicating with each other in a different way, because those svc.cluster.local addresses only exist within the context of a Kubernetes cluster.
Thankfully, it was in my very first job that I learned to read configuration values like these from environment variables instead of hard-coding them, so this step was as simple as changing a couple of configuration values from svc.cluster.local addresses to localhost.
“Hang on!!” I hear some readers exclaim. “localhost?? I thought we were deploying this thing to a production environment on Fly.io!?”
Yes my friends, we are, but since Notado is composed largely of resource-efficient Rust services, I thought, “why not get this all running on a single Fly micro VM and reduce all the unnecessary complexity (and cost!) that was added to the deployment architecture by my initial lazy choice of Kubernetes as the deployment target?”
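Reading each peer's address from the environment makes this switch a one-line configuration change per service. A minimal Rust sketch of the idea (the variable names, ports and defaults are illustrative, not Notado's actual configuration):

```rust
use std::env;

// Each service resolves its peers' addresses from the environment,
// falling back to a localhost default for the single-VM deployment.
fn service_url(var: &str, default_port: u16) -> String {
    env::var(var).unwrap_or_else(|_| format!("http://localhost:{default_port}"))
}

fn main() {
    // On Kubernetes, SCRAPER_URL would have been set to something like
    // http://notado-scraper.production.svc.cluster.local; on a single
    // Fly micro VM, it can simply be left unset.
    let scraper = service_url("SCRAPER_URL", 8081);
    let integrations = service_url("INTEGRATIONS_URL", 8082);
    println!("scraper at {scraper}, integrations at {integrations}");
}
```

The same binary runs unchanged in both environments; only the environment variables differ.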
After switching out the configuration variables to allow all of the services to communicate with each other on localhost, my next task was to figure out how to run multiple processes on a single Fly micro VM. I settled on overmind, a Procfile-driven process manager.
I highly recommend this approach to anyone else who finds themselves in a similar situation; it is simple, elegant and it just works.
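overmind reads a Procfile that lists one long-running command per process. A sketch of what one might look like for a setup like this (the process names and binary paths are illustrative):

```
# Procfile: one line per long-running process on the micro VM.
web: /app/notado-web
scraper: /app/notado-scraper
integrations: /app/notado-integrations
discord: /app/notado-discord-bot
```

Running overmind start then supervises all four processes inside the single VM, restarting and multiplexing their logs for you.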
You may have noticed at this point that I was not just planning a move from Digital Ocean to Fly.io, but I was in fact also planning a fundamental shift in strategy from horizontal scaling to vertical scaling.
With Kubernetes, it’s pretty common to scale services horizontally. This basically means that when you have a service that is struggling to respond to all of the requests being received, you can just add another one (insert DJ Khaled meme) alongside it to help spread the load of the requests.
If you keep adding more and more replicas of a service, you can start to visualise a horizontal line of services all trying to catch incoming requests.
In moving away from Digital Ocean’s managed Kubernetes offering, I was planning to run everything on a single micro VM on Fly.io, and then simply increase the capacity of that micro VM whenever I needed in the future.
This is vertical scaling in a nutshell; using a bigger boat.
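The contrast between the two models is visible in the commands themselves; a sketch, with illustrative app and deployment names:

```shell
# Horizontal scaling on Kubernetes: add more replicas behind a Service.
kubectl scale deployment/notado-scraper --replicas=3 -n production

# Vertical scaling on Fly.io: give the one machine more headroom instead.
fly scale vm shared-cpu-2x
fly scale memory 1024
```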
I think that especially for those of us working in the areas of Platform, Infrastructure, DevOps etc., who then go on to build services as solo developers, we can get a little lost in all of the complex and fancy tooling that we become accustomed to in our day jobs.
With some careful planning, we can make it so that a single VM is all that is needed, and whenever that is within the realm of possibility, it is worth putting in the additional work up-front to make it a reality.
All in all, migrating services seemed pretty easy, right? Next came the part that I was dreading. Database migration. I am not exaggerating when I say that I was actually losing sleep over this.
One day I noticed a submission on Hacker News, a new blog post from Tailscale, and I had a thought. “What if I don’t even have to migrate the database?”
After consuming the excellent documentation on the Tailscale website about setting up a subnet router, I decided that for the foreseeable future I would keep my managed Postgres database on Digital Ocean, and simply use Tailscale to connect to it from the Fly micro VM over a private IP address.
Let’s break this down a little.
Provisioning a Tailscale Proxy VM
This was pretty straightforward: I used cloud-init to provision a VM in the same VPC as the managed Postgres database, installed Tailscale, and started it with the private CIDR range of the VPC passed to the --advertise-routes flag.
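As a sketch, the cloud-init user data for such a proxy VM might look like the following; the auth key and CIDR range are placeholders, and the steps follow Tailscale's subnet router documentation:

```yaml
#cloud-config
# Provision a Tailscale subnet router inside the VPC.
runcmd:
  # Install Tailscale via the official install script.
  - ['sh', '-c', 'curl -fsSL https://tailscale.com/install.sh | sh']
  # Enable IP forwarding so the VM can route traffic for the subnet.
  - ['sh', '-c', "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-tailscale.conf && sysctl -p /etc/sysctl.d/99-tailscale.conf"]
  # Join the tailnet and advertise the VPC's private CIDR range.
  - ['tailscale', 'up', '--authkey=tskey-REDACTED', '--advertise-routes=10.110.0.0/20']
```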
Updating Firewall Rules
By default, the new proxy VM was not able to connect to Postgres due to the strict database firewall rules I had previously set, so I updated the firewall rules to allow connections from the private IP address of the proxy VM.
Launching Tailscale on Fly Micro VMs
The final step was to ensure that Tailscale was also installed in the Docker container deployed to Fly.io, and to start Tailscale with the --accept-routes=true flag before letting overmind bring up all the processes that run on the micro VM.
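A container start script along these lines ties the pieces together; the binary paths, state location and environment variable name are illustrative:

```shell
#!/bin/sh
# Container entrypoint for the Fly micro VM (illustrative sketch).
# Start the Tailscale daemon; depending on the environment, you may need
# to add --tun=userspace-networking if /dev/net/tun is unavailable.
/usr/local/bin/tailscaled --state=/var/lib/tailscale/tailscaled.state &
# Join the tailnet and accept the subnet routes advertised by the proxy
# VM, so the VPC's private addresses become reachable from here.
/usr/local/bin/tailscale up --authkey="${TAILSCALE_AUTHKEY}" --accept-routes
# Hand over to overmind to supervise all of the app processes.
exec overmind start -f /app/Procfile
```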
Bringing it all together
With all of this finally in place, whenever a service running on the Fly micro VM made a request to a domain resolving to an IP address in the CIDR range advertised in the first step, the request would travel through Tailscale to the Digital Ocean droplet acting as the subnet router, and from there on to the managed Postgres database.
The nice thing about this approach is that you can pretty much apply it to any managed database on any cloud provider if you want to move your actual services over to Fly.io but keep the same managed database. The main consideration to keep in mind is the geographical distance between the managed database and your desired Fly.io region(s).
At this point, I had a fully functioning staging environment on Fly.io and I felt a huge weight lifted from my shoulders.
I realised that somewhere along the way I had stopped believing in myself and my ability to build and manage a non-trivial service, and that I had now started believing again. But that is a topic for another article.
I guess this technically deserves its own “phase” heading, since it covers switching the production environment from Digital Ocean to Fly.io, but in actual fact it was pretty uneventful.
I had both production instances running side by side for a while, reading from and writing to the same database, but with separate Meilisearch instances.
When I was ready, I changed the DNS record for notado.app to point to the instance running on Fly.io and waited for the updates to propagate. To make sure nothing was missing from the new Meilisearch instance, I fully rehydrated it from Postgres. That’s pretty much it.
I then went ahead and deleted the trusty old Kubernetes cluster on Digital Ocean that had served me so well for the last two years while burning a hole in my pocket, and started looking to the future.
Ready Set Go!
To say that my productivity working on the Notado codebase skyrocketed once I had migrated everything over to Fly.io would be a huge understatement.
The cast of Notado services introduced at the beginning of this article has morphed and changed in ways better adapted to the new environment in which it is deployed and now lives. I will be sharing more on what those transformations look like here in the future.
I am excited to share my lessons and takeaways from building, iterating on and managing Notado over the past two years in a series of upcoming technical articles. Stay tuned for my thoughts on:
Betting on the Rust programming language
Tokio, Messaging Queues and Async Rust
SwiftUI, Flutter, Native Applications and iOS Shortcuts
If you can’t wait, check out my Software Development Feed, where all of the interesting highlights from everything I read online related to building good (and bad) software development experiences get published (there is of course also an RSS feed if you want to subscribe).
Wait! What about Notado?
Notado is healthier and happier than ever!
After two years of using Notado exclusively, I could not imagine going back to any other service. Notado just feels right. It is everything that I have ever wanted in a bookmarking service.
There is nothing quite like being perpetually frustrated with every tool available for a job, finally biting the bullet and creating a tool that addresses your needs, and then having the pleasure and satisfaction of using that tool which now does exactly what you want and then gets out of the way.
After putting in this recent work to bring the operating cost down to a more sustainable sum, and significantly reducing the complexity of managing hosting and deployments, I can confidently say that Notado will live on as long as I do (and hopefully beyond!), even if as nothing else, then as the one place where I will save and organise the highlights from everything I read online until the end of my time.
Here is a run down of some of the new features that have rolled out since the migration to Fly.io was completed:
Kindle highlight import (without requiring access to your Amazon account or having to copy and upload files from your physical device!)
Full support for Readwise-style inline tagging for Kindle highlight imports
Hacker News favourite comments import
Tag aliases to make tagging even easier on Kindle or anywhere else (e.g. .qsf => quotes science-fiction)
Login via NotadoBot
Deeply integrated iOS shortcuts for saving comment permalinks and saving text selection highlights from Safari
Real-time filtering for titles in your Notado Library as you type
A new landing page with some nice interactive elements
There will be dedicated articles taking a closer look at some of these features coming out soon.
What about the never-ending free open beta?
I am incredibly grateful to the 3000+ users who have been participating in Notado's free open beta since it began in 2020. I honestly never thought that my idea of "content-first bookmarking" would resonate with so many people.
The open beta will come to an end this year, and from January 2023, subscriptions for Notado will be available at $5.99/month. Users who create an account in 2023 will get a free 30-day trial (no payment details required) before being asked to purchase a subscription.
Users will always have access to all of their previously saved content regardless of subscription status.
There will always be a quick and easy way to export all of your content in a machine-readable format with a sane schema that closely resembles what Notado shows you, and respects the time, energy and effort that you have put into the organisation of your saved content library on Notado.
Of course, if you wish for your account and all related saved content to be removed from Notado at the end of the open beta or a trial period, just send an email. You don’t have to worry about giving a reason, just say that you want your account and related content deleted, and I’ll be happy to confirm once it is done (I mean really deleted, not just a soft-delete with a “deleted” column in the database set to “true”!)