Enviame - Interim Writeup
Table of Contents
- Introduction
- Technical Overview
- Design Decisions
- 1. Using Rust as Backend
- 2. Using SSR instead of an SPA as Frontend
- 3. Choosing PostgreSQL as the Database Solution
- 4. Using In-Memory Scheduling for Email and Calendar Workers instead of Cron Jobs
- 5. Using sqlx for Database Integration
- 6. Using askama as the Templating Engine
- 7. Other Deployment Decisions
- Challenges and Solutions
- Further Improvements
- Afterthoughts
This post is licensed under CC BY-NC-ND 4.0.
What: A full-stack app with a Rust backend to prioritise message delivery with configurable urgency, SSR frontend, and email + calendar integration.
Why: To solve a personal problem around message prioritisation and experiment with Rust backend architecture.
Stack: Rust (axum, tokio, sqlx, askama), PostgreSQL, Vanilla JS + Bootstrap.
Philosophy: Overengineered on purpose to gain experience in industry-standard solutions.
Status: MVP built and deployed in under a day, followed by QoL and scalability improvements. v1.0.9 feature-complete.
Future: React/Svelte SPA, OAuth2, more testing/logging.
Introduction
This is an interim writeup for Enviame, drafted after v1.0.9.
Enviame is a simple project designed to solve my real-life problem of message prioritisation, while also serving as an opportunity to build a full backend in Rust from scratch (with the help of various runtimes and libraries).
Motivation
At the core of all productivity techniques is a simple principle: minimise distractions to maintain focus. That’s why phones have Do Not Disturb modes and silence buttons.
The problem is, these tools often create an all-or-nothing scenario - either everything gets through, or nothing does. While some platforms offer limited solutions - like Apple's iMessage showing when someone has silenced notifications and letting you bypass it if needed - most of us still rely on inconsistent conventions like "call if it's urgent". For example, a friend calling about their lost ID might be fine during a focused study session, but not during an online interview or a sports race. After all, some fires burn hotter than others.
Enviame is designed to introduce more nuanced control, enabling different levels of message priority to ensure only the right messages get through at the right time.
Development Journey
All core features, with the exception of sending notification emails, were completed within the first 3 hours of development. The email features were then implemented within around 3 more hours, making a working MVP within a day. Multiple hours were then spent setting up deployment pipelines (as this was new to me) and debugging the email features (which turned out to just be inconsistent delivery), but by v1.0.3 the project was already quite complete.
Most of the time afterwards was spent on QoL improvements, for example improving the frontend, minimising dependency usage, etc. The two largest features between v1.0.3 and v1.0.9 were the message delivery status display and the calendar status display - neither of which is a particularly large feature.
Design Philosophy
You may notice that the project is heavily overengineered - for example, using askama for templating, using HTML minifiers, using cryptographically secure hashing for message queries, and running email and calendar worker threads - when obviously I won't have 1000 friends sending me messages at once, and obviously they aren't malicious hackers trying to find out whether someone else's message got delivered. All of this was very much intentional.
The goal here is to gain experience in industry-standard solutions to these types of problems, instead of landing with ad hoc strategies that would only work for a few dozen users.
Making the deployment process smoother was also one of the goals that became increasingly important after an accident wiped the entire production server (oops). After v1.0.8, the building process was moved to the CI/CD workflow so there is minimal setup and configuration required on the server side.
Technical Overview
Enviame is a Server Side Rendered (SSR) Multi-Page Application (MPA) with AJAX.
Main Features
- User Authentication: Easy login and registration, secure and persistent storage with PostgreSQL
- Email Delivery: Immediate notification delivery with SMTP, with multiple priority options
- Calendar Status: Integration with the iCalendar protocol, displays current status and expected contact hours
Tech Stack
- Backend: Rust
  - axum: Web Framework
  - tokio: Asynchronous Runtime
  - lettre: SMTP Email Delivery
  - sqlx: Database Interaction
  - askama: Static HTML Templating
- Database: PostgreSQL
- Frontend:
- Static HTML
- Vanilla JS
- Bootstrap UI
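To give a feel for how these pieces fit together, here is a minimal axum 0.7-style sketch of an SSR route. The route path and handler are illustrative, not the actual Enviame code, and in the real app the HTML would come from an askama template rather than a string literal.

```rust
use axum::{response::Html, routing::get, Router};

// Hypothetical home-page handler: returns server-rendered HTML directly.
async fn home() -> Html<&'static str> {
    Html("<h1>Enviame</h1>")
}

#[tokio::main]
async fn main() {
    // Illustrative route table; the actual paths are assumptions.
    let app = Router::new().route("/", get(home));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}
```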
Design Decisions
1. Using Rust as Backend
Rust has some obvious advantages over the more popular backend options such as Node.js or Python. It is a statically typed, compiled language with complete type safety, the ownership model is great at preventing data races in a multi-threaded context, and the Cargo ecosystem is rich enough for tasks like sending emails or converting datetimes to feel like a cakewalk. Not only is the resulting server significantly faster at runtime, but the developer is also far less likely to make mistakes.
The project was intended for me to gauge the difficulty of writing an actual multi-threaded server in Rust. So far, I am delighted by what Rust has to offer.
2. Using SSR instead of an SPA as Frontend
This was a debatable decision. A simple excuse would be that the app only has a few pages and building an entire separate frontend would be unnecessarily complex, but that would contradict the design philosophy of using the "industry standard solution": SPAs are, at least as of right now, definitely the de facto standard for a frontend.
To put it brutally, I am a systems/backend developer, and the initial motivation for the project was to experiment with a Rust backend. I neither planned to nor had the time to invest in the frontend, and a separate SPA would also have added complexity to the backend, since I would then have CORS to worry about.
AJAX was indeed slightly less enjoyable and perhaps more verbose than using Svelte or React, but for a "two-form" app like this, it was definitely a good trade-off. I would say, if some day I start to make an application with more than a dozen pages, or if those pages are more complex than one or two forms and half a dozen buttons, then an SPA would probably be the way to go.
3. Choosing PostgreSQL as the Database Solution
Honestly, not a lot of thought went into this decision. PostgreSQL is pretty much also the de facto database for modern large-scale applications. Given the design philosophy of scalability, this was a no-brainer.
Given that the project did not use any of PostgreSQL's advanced features, an alternative would be SQLite, which is more portable and likely faster and easier to set up.
4. Using In-Memory Scheduling for Email and Calendar Workers instead of Cron Jobs
Cron jobs are more lightweight, use less CPU, and are generally the more popular option for production systems. I opted for in-memory scheduling because, in the context of this project, the difference is not significant enough to justify the more complex setup (e.g., exposing an entire API endpoint just so the cron job can update the calendar status).
The disadvantage is that if the app panics and aborts, the remaining emails won't be sent and the calendar won't be updated. However, given the decision to use SSR, the app wouldn't even display a home page if the process weren't running, so a dead scheduler implies a visibly dead app anyway.
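The in-memory worker pattern can be sketched with nothing but the standard library: jobs go onto a channel and a dedicated thread drains them. This is a simplified illustration (the job type and function names are mine, not Enviame's), but it captures why no cron entry or extra HTTP endpoint is needed - and why queued work dies with the process.

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the real work ("send this email" / "refresh calendar status").
fn process_job(job: &str) -> String {
    format!("processed: {job}")
}

// A minimal in-memory worker: jobs are queued over a channel and a
// dedicated worker thread drains them in order. If the process aborts,
// anything still in the channel is simply lost - the trade-off noted above.
fn run_worker(jobs: Vec<String>) -> Vec<String> {
    let (tx, rx) = mpsc::channel::<String>();
    let handle = thread::spawn(move || {
        rx.iter().map(|j| process_job(&j)).collect::<Vec<_>>()
    });
    for job in jobs {
        tx.send(job).unwrap();
    }
    drop(tx); // closing the channel lets the worker thread finish
    handle.join().unwrap()
}
```

A periodic task (like the calendar refresh) would be the same idea with the producer sleeping between sends.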
5. Using sqlx for Database Integration
This was a simple decision. Even though sqlx pulls in more dependencies than something like postgres, the static typechecking for the database along with the support for timestamptz (timestamp with timezones) makes the development experience a lot easier and a lot more enjoyable.
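As a sketch of what that looks like in practice (the table and column names here are hypothetical, not Enviame's actual schema): the query! macro checks the SQL against a live database - or the offline cache - at compile time, and a timestamptz column comes back as a typed DateTime<Utc>.

```rust
use sqlx::types::chrono::{DateTime, Utc};
use sqlx::PgPool;

// Hypothetical query: fetch the timestamp of the most recent message.
// A typo in the SQL or a mismatched column type fails the build,
// not a request at runtime.
async fn latest_message_time(pool: &PgPool) -> sqlx::Result<Option<DateTime<Utc>>> {
    let row = sqlx::query!(
        "SELECT created_at FROM messages ORDER BY created_at DESC LIMIT 1"
    )
    .fetch_optional(pool)
    .await?;
    Ok(row.map(|r| r.created_at))
}
```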
6. Using askama as the Templating Engine
Also a simple decision. askama performs template syntax checks at compile time (as opposed to runtime for tera), meaning the app is both safer and faster at runtime.
askama also has simple and straightforward syntax (which, surprisingly, html-minifier also had no issues handling), along with strong typing.
Comparing askama and maud, in retrospect the latter might be a better option for performance, but porting from multiple HTML files (with duplicate sections) was likely easier for askama - I cannot tell for sure.
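For illustration, here is roughly what an askama-templated struct looks like. The template is inlined for self-containment (the real app loads HTML files from a templates directory, and this struct is my invention, not Enviame's); the point is that a misspelt variable fails the build rather than a request.

```rust
use askama::Template;

// Inline template source for illustration only; typically this would be
// #[template(path = "message.html")] pointing at an HTML file.
#[derive(Template)]
#[template(source = "{{ sender }}: {{ body }}", ext = "txt")]
struct MessageTemplate<'a> {
    sender: &'a str,
    body: &'a str,
}

fn render_message(sender: &str, body: &str) -> String {
    MessageTemplate { sender, body }.render().unwrap()
}
```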
7. Other Deployment Decisions
Since a lot of decisions were made during the deployment process, here's a summary with justification:
- Self-hosting Database instead of Supabase or Firebase: Easier to debug, cleaner structure
- Custom Build Configuration: Prioritise runtime speed over compile speed
- Minimal Dependencies: Still optimise for compile speed and binary size
Challenges and Solutions
SMTP Email Delivery
After experimenting with SendGrid, I decided that it was way too unreliable, with emails almost always landing in spam, which defeats the purpose of notification emails or message receipts.
Since I am managing my domain emails via iCloud, I decided to experiment with iCloud's SMTP. It rarely went to spam and testing succeeded so I kept it as my option.
In the following days, there were multiple occurrences of emails being delivered the next morning, or simply not being delivered at all, without any error logs on the server. On manual inspection, it seemed that iCloud would also return a 202 Accepted for these emails, but for some reason they were either rejected or queued for unreasonably many hours.
After some painful debugging, I found out that these "delayed" emails had an iCloud spam header set to true. It turns out that, since my email template was a few lines of plain text and my test messages were often random letters, iCloud would mark some of them - especially repeated attempts of the exact same message - as spam. Presumably these then went through human review, hence some messages went through hours later while others got rejected.
This was fixed very easily by using a proper HTML template for my email, and setting the content to HTML instead of plain text.
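With lettre, the fix amounts to setting the content type header when building the message. The addresses and subject below are placeholders, not the real configuration.

```rust
use lettre::message::header::ContentType;
use lettre::Message;

// Build the notification as HTML rather than plain text - the fix
// described above. Addresses and subject line are illustrative.
fn build_notification(body_html: &str) -> Message {
    Message::builder()
        .from("Enviame <notify@example.com>".parse().unwrap())
        .to("Recipient <user@example.com>".parse().unwrap())
        .subject("New message")
        .header(ContentType::TEXT_HTML) // key change: text/html, not text/plain
        .body(body_html.to_string())
        .unwrap()
}
```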
It was somewhat irritating to debug this. Since the delivery behaviour was inconsistent (not a 100% failure rate), there were multiple times where I wrongly deduced the cause of the bug because, of two otherwise identical attempts, one succeeded and one failed - when in reality iCloud had simply marked the latter as spam.
sqlx and CI/CD
This is what happens when you ask ChatGPT to generate the code for you instead of reading the docs. I asked ChatGPT for the standard template to connect to and send queries to a PostgreSQL database; it told me I could use the sqlx::query! macro. I tested it out, realised it supported compile-time type-checking, and just left it in the code.
When the time came to deploy, I decided that a standard build-and-deploy CI/CD pipeline wouldn't work, since the GitHub Actions runner doesn't have a Postgres database with the proper schema. I therefore opted for the more complicated solution: an ad hoc bash script on the server to handle updates.
After deciding in v1.0.8 to make the deployment process smoother, I wrote a ridiculous workflow that created a Postgres database, applied the schema, then used that to compile the binary. I again asked ChatGPT to generate the workflow for me, and luckily, this time it decided to give me sqlx prepare for some unknown reason (incorrect syntax-wise, and not the appropriate command in this context either...). Curiosity got the better of me, and after some research I found that sqlx's type-checking has always had an "offline" mode: running cargo sqlx prepare generates a query cache, making a normal deployment workflow possible as long as that cache is checked into version control.
A stupid mistake, but thankfully it got fixed, and now I know better.
Further Improvements
This section describes possible further improvements that likely won't be implemented. For features in development and planned for v1.1.0, see v1.0.7 release notes.
- Add unit tests and integration tests
- Add proper logging in different levels
- Implement OAuth2 for third-party login options
- Separate the frontend and refactor to an SPA framework like React or Svelte
- Improve database scalability by implementing sharding or read replicas
Afterthoughts
Enviame started as a single-page tool with one endpoint to deliver messages. Then token-based login was added to simplify UX, then registration was added, then email delivery with different priorities was added... Before long it became a playground for me to experiment with various industry-standard solutions, and it was definitely an enjoyable ride. Most of the fun came from the entire process of envisioning, designing, and building from scratch, then stumbling across a blog post explaining the exact same thing.
Rust definitely made the development process a lot smoother and more enjoyable compared with similar Python servers I've written in the past. Having carefully studied TRPL last year, I hardly encountered any Rust-specific issues. Even when the borrow checker occasionally complained, it's usually an immediate realisation and an easy fix. The type system and the ownership model definitely saved me hours or days of painful debugging.
The experience with CI/CD was slightly painful but fun, while the experience with SMTP was mostly just painful. Still, it's the kind of chaos I signed up for, and it's definitely satisfying to discover the root cause of an issue through layer-by-layer debugging, as underwhelming as "test emails were blocked by iCloud's spam filter" sounds.
All in all, a very rewarding side quest.