<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Davide Imola</title>
    <link>https://davideimola.dev</link>
    <description>Software engineering, infrastructure, security, and open source — by Davide Imola.</description>
    <language>en-us</language>
    <lastBuildDate>Sat, 18 Apr 2026 04:12:50 GMT</lastBuildDate>
    <atom:link href="https://davideimola.dev/rss.xml" rel="self" type="application/rss+xml"/>
    <image>
      <url>https://davideimola.dev/images/davide-speaking-profile.webp</url>
      <title>Davide Imola</title>
      <link>https://davideimola.dev</link>
    </image>
    <item>
      <title>Stop Prompting. Start Thinking.</title>
      <link>https://davideimola.dev/blog/stop-prompting-start-thinking</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/stop-prompting-start-thinking</guid>
      <description>Not a tutorial. Not a list of prompts. This is the mental model I built working with AI: what works, what I changed my mind about, and why the same method applies whether I&apos;m writing code or writing a blog post.</description>
      <content:encoded><![CDATA[
This post exists because of a problem I've always had: when I sit down to write something (an article, a post, anything), the words don't come out right. My reasoning compresses. I end up with something weaker than what I actually think.

I'm a talker. I do my best thinking out loud.

For a while I tried to fix this with AI: give it a topic, get back a draft, edit it into something mine. It worked okay. But the output felt generic, detached. It didn't sound like me.

Then I noticed something: the same problem was showing up in my development work. I'd hand a task to an AI tool, get back code that was technically correct but didn't feel right. Wrong abstractions, missing conventions, no real judgment. The AI was generating, not thinking.

The fix in both cases turned out to be the same thing: stop treating AI as a generator. Start treating it as a thinking partner.

This post is about how I made that shift and the workflow I built around it. It started as a method for development, but I've since applied it to writing, editorial work, branding. The underlying principle is the same everywhere.

_(And yes, this post was written using that exact method. I spoke, the AI asked questions, pushed back, and helped shape what I was saying into something readable.)_

## Think first. Then open the AI.

The biggest mistake I see people make with AI is reaching for it too early.

Before I type a single message, I read the requirements myself. I think about what needs to be done, what the tricky parts are, what I'd need to clarify with the PM or with myself. I form my own mental model of the problem.

Only then do I bring in the AI.

This isn't a ritual. It's practical. If I don't understand the task, I can't explain it well. And if I can't explain it well, I can't respond usefully when the AI starts asking hard questions. The quality of the entire session depends on that first ten minutes of thinking alone.

There's also something else: when you understand the problem yourself first, you stay the senior in the room. AI tools are fast, confident, and occasionally wrong. If you haven't thought it through, you have no filter for the output.

## Big picture first, then drill down

Once I start working with an AI, I never jump straight to implementation details.

I start broad: here's the task, here's the context, here's roughly what I think needs to happen. Then I use a skill called `grill-me`: a Socratic dialogue where the AI asks hard questions instead of immediately generating a plan. It pushes back, challenges assumptions, asks "why" a lot.

This phase is where most of the real work happens. Questions asked upfront eliminate entire categories of rework later. A wrong assumption caught during design costs nothing. The same assumption caught during implementation costs a lot.

The key insight is that going deeper on design isn't wasted time. It's the most leveraged time in the whole process. I used to feel like I was procrastinating by not writing code yet. Now I treat this phase as the actual work.

One recurring problem I've had: even with a solid PRD, individual user stories get lost during implementation. You start a session, the AI tackles the obvious parts, and by the end something that was clearly in the spec is just... missing. Tools like [Ralph](https://github.com/snarktank/ralph) exist specifically for this: an autonomous agent loop that works through a PRD story by story, tracking progress across iterations so nothing falls through the cracks. I haven't fully integrated it yet, but the problem it solves is real and I've felt it.

## Voice unlocks context you didn't know you had

During design discussions, and honestly throughout the whole process, I use voice input instead of typing.

Not because typing is slow. Because speaking makes me think differently.

When I'm reading a response and the answer isn't a simple yes or no, when there are tradeoffs, nuances, things that depend on context I haven't fully explained, I switch to voice. Speaking out loud forces me to reason in real time. I self-correct mid-sentence. I remember things I forgot to mention. I add context that would have stayed locked in my head if I'd just typed a terse reply.

Voice also makes longer, richer responses feel natural. Nobody wants to type three paragraphs. But talking for thirty seconds? Easy.

The result is better input, better follow-up questions, and output that's more grounded in what I actually want.

This applies to writing just as much as to development. I'm not a natural writer. The channel of "sit down and type an article" compresses my thinking instead of expanding it. Speaking to an AI as an editorial partner, having it ask me questions, challenge weak angles, surface the structure in what I'm saying: that's what made this post possible. The ideas were always there. I just needed a different way to get them out.

## Skills and context files are guardrails, not magic

When I first started using AI for development, I'd give it a task and it would produce code that worked. Technically correct. But not written the way I'd write it. No tests, or tests that weren't meaningful. Styling that didn't match the system. Abstractions that solved the problem but ignored the conventions already in the codebase.

The fix wasn't better prompts. It was better context.

I now have a `CLAUDE.md` file in every project: a context document that explains the stack, the conventions, the design tokens, what not to do. It gets read at the start of every session and it changes everything. The output starts looking like code that belongs in the codebase.
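To make this concrete, here's a minimal sketch of what such a context file can look like. The stack and rules below are hypothetical examples, not my actual file:

```markdown
# Project context

## Stack
- Next.js (App Router), TypeScript, Tailwind v4

## Conventions
- Server Components by default; add "use client" only when needed
- All colors come from the design tokens; never hard-code hex values
- Every new component gets a test next to it (`Component.test.tsx`)

## Don't
- Don't add new dependencies without asking first
- Don't inline styles; use the existing utility classes
```

The exact sections matter less than the fact that the file encodes decisions you'd otherwise re-explain every session.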

Then I started using skills. A skill is a prompt template that encodes a specific process. Instead of explaining how I want something done every time, the skill carries that knowledge. `tdd` runs a test-first development loop. `simplify` does a post-implementation review for code quality. `grill-me` runs the Socratic design session.

I was skeptical of domain-specific skills at first. I thought they'd be overkill. Then I loaded a frontend design skill and the output stopped looking generic. I changed my mind.

### BYOS: Build Your Own Skills

The most powerful thing you can do is write skills for your own workflows.

The skills I use most aren't the built-in ones, they're the ones I wrote myself. `write-blog-post` (the skill behind this post) doesn't write for me. It interviews me, pushes back on weak angles, helps me find the structure in what I'm saying. `social-post` takes a finished post and turns it into platform-specific content for LinkedIn and BlueSky. I have a skill for [Worky](https://github.com/davideimola/worky) (an open-source project I'm building) that helps define issues in a way that's specific to that project's conventions.

All of these are in my public repo if you want to look at them.

The pattern is: when you find yourself repeating the same setup instructions at the start of a session, that's a skill waiting to be written. Once you write it, it works exactly the same way every time, for you and for anyone else on the project.
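As an illustration of the shape a skill takes: in Claude Code, a skill is a directory with a `SKILL.md` file, YAML frontmatter on top, instructions below. The skill below is entirely made up, just to show the pattern; check your tool's docs for the exact format:

```markdown
---
name: release-notes
description: Turn a list of merged PRs into user-facing release notes
---

When the user invokes this skill:

1. Ask which range of changes to cover (a tag, a date, or a PR list).
2. Group the changes into Features, Fixes, and Internal.
3. Write each entry from the user's perspective, not the implementer's.
4. Show a draft and ask for corrections before finalizing.
```

Those four numbered steps are exactly the "setup instructions" you'd otherwise repeat by hand at the start of every session.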

The real unlock isn't any individual skill. It's the realization that you can codify your own processes.

## A note on tools

I use Claude Code as my main AI tool, specifically in agentic mode with Claude as the driver on implementation tasks while I stay in the PM seat. But the principles in this post aren't Claude-specific. The mental model, the voice habit, the context files, the skill pattern: these transfer to any AI tool that lets you shape the interaction.

Agentic workflows (where the AI executes multi-step tasks autonomously) are where things are heading, and they make the guardrails even more important. The less you're in the loop on individual steps, the more your context files and skills need to encode your standards upfront.

## The review loop

After implementation, I don't just run the tests and move on. I do two independent reviews: mine and the AI's.

I look at the code myself first. Then I ask Claude to review it too, specifically looking for things I might have missed: security issues, edge cases, patterns that work but don't belong. Then I compare. Some findings overlap. Some are mine only. Some are the AI's only. The merged list becomes the input for the next iteration.

This back-and-forth is where a lot of quiet improvements happen. It's also where I catch the things that are technically correct but feel wrong: an abstraction that solves the problem but adds cognitive overhead, a test that passes but doesn't actually verify the behavior I care about.

There are tools that try to automate this loop entirely. I'm curious and plan to experiment, but I'd want to keep my own review at the end regardless. Not every finding the AI surfaces is worth acting on, and not every real problem shows up in an automated scan. The loop is valuable. Removing the human from it entirely is a different bet.

## You are still the senior developer

Everything above is in service of one principle: the AI gives feedback. You evaluate it.

Don't accept output because it looks confident. Don't accept a plan because it's detailed. When the AI produces something, read it. Think about whether it's what you actually wanted and whether it's written the way it should be. When you disagree, say so. Be specific about your reasoning: "I'd have done this differently, here's why, what do you think?" This usually produces a better answer than the original.

AI-assisted work succeeds when the human is still driving. The moment you stop exercising judgment, you're not working with an AI partner. You're just running an autocomplete on your codebase.

---

This is my workflow as it stands today. It'll evolve. I'm still figuring out how to make the review loop tighter, and I'm experimenting with new skills as I find new patterns.

If you work differently, or if something here doesn't make sense for your context, I'd genuinely like to hear it. Leave a comment or find me on LinkedIn.

## Resources

The video that originally inspired a lot of this workflow (and where I first came across most of these skills):

- [AI-Assisted Development workflow by Matt Pocock](https://youtu.be/hX7yG1KVYhI)

The core skills I use, all open source:

- [`grill-me`](https://github.com/mattpocock/skills/tree/main/grill-me)
- [`write-a-prd`](https://github.com/mattpocock/skills/tree/main/write-a-prd)
- [`prd-to-issues`](https://github.com/mattpocock/skills/tree/main/prd-to-issues)
- [`tdd`](https://github.com/mattpocock/skills/tree/main/tdd)

My own custom skills (write-blog-post, social-post, and others) are in [my public repo](https://github.com/davideimola/davideimola.dev/tree/main/.claude/skills).
]]></content:encoded>
      <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>AI</category>
      <category>Developer Experience</category>
      <category>Productivity</category>
      <category>Tooling</category>
    </item>
    <item>
      <title>I Rebuilt My Site Twice. Here&apos;s What the Second Time Taught Me.</title>
      <link>https://davideimola.dev/blog/i-rebuilt-my-site-twice</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/i-rebuilt-my-site-twice</guid>
      <description>Two rebuilds, two different tools, two completely different experiences. The second one worked. Not because of the AI.</description>
      <content:encoded><![CDATA[
I rebuilt my personal site. Then I rebuilt it again.

The first time I thought the problem was the old site. The second time I realized the problem was how I was working.

## Two versions, same problem

The original site was built in Astro. I chose Astro because I wanted to experiment with something new. I'd been doing mostly backend and infrastructure work at the time, and my frontend skills were, let's say, still developing. The result was functional and completely forgettable: white text on a black background, no real personality, nothing that said anything about who I was or what I did.

![The original Astro site: dark, minimal, forgettable](/images/blog/i-rebuilt-my-site-twice/old-site.webp)

So I decided to rebuild it using AI. I had access to Cursor through work, so I started there. The workflow was straightforward: describe what you want, get a plan, let it generate most of the site at once.

It was better. Some things were genuinely nice: red underlines styled like brushstrokes, a layout that felt more composed. But it was also a mess. Too many color variations that didn't quite agree with each other. And at some point, because I'd mentioned that I love Japanese culture and aesthetics, the AI decided the obvious move was to add **kanji characters** to the design.

![The Cursor rebuild: more styled, but still inconsistent](/images/blog/i-rebuilt-my-site-twice/cursor-site.webp)

I don't speak Japanese. I removed the kanji.

The core issue wasn't the kanji. That was just the most visible symptom. The real problem was that I'd handed over a vague brief ("I like Japanese aesthetics") and gotten back a literal interpretation, with all the inconsistencies that come from generating a whole site in one go. **The design system wasn't solid because I'd never defined one.** Everything was a bit improvised.

Still not working. But now I knew why.

## Starting over with a different question

I could have kept patching the Cursor version. Instead I started from scratch. Partly out of frustration, mostly out of curiosity. I'd been reading and watching more about AI-assisted development, and I wanted to try a fundamentally different approach.

The question wasn't "can AI build my site?" I already knew it could. The question was: **what happens if I stop treating AI as a site generator and start treating it as a collaborator?**

I switched to Claude Code and gave myself one constraint before touching a single component: I would define the design system first.

## Design system first, everything else second

This sounds obvious in retrospect. It wasn't obvious to me at the time.

I'm not a designer. I can look at something and know whether it works, but I struggle to build the underlying system that makes it consistent. What I needed wasn't something to generate components. I needed something that would push back when my decisions were incoherent.

So I started with tokens: background colors, text hierarchy, border values, a single accent color (Akane red, `#C91F37`). I set strict rules. The AI started flagging things I wouldn't have caught on my own: too many color variations, inconsistent visual language, patterns that looked fine in isolation but clashed in context.

> "Too many colors. Pick a few that work together and stick to them."

That kind of feedback is what a designer gives you. I was getting it from an AI, but only because I'd asked the right questions and **constrained the scope**. Working on one isolated part of the project at a time, not the whole thing, made the feedback loops tight and the outcomes predictable.

The result was a design system I actually understood. Building the site on top of it was a completely different experience.

## The voice input accident

Somewhere in the middle of the second rebuild, I started using voice input.

I didn't plan to. I'd seen someone mention it in a video, tried it out of curiosity, and kept using it because it worked better than I expected.

The difference isn't speed. Typing is faster for precise, short instructions. The difference is **how you think**. When you speak, you reason out loud. You self-correct in real time. You catch yourself saying "actually, no, wait, what I really mean is..." and that process of rephrasing turns out to be genuinely useful.

I started with the voice input built into Claude, then experimented with other tools. The prompts I produced with voice were longer, more specific, and more honest. I said things I wouldn't have bothered typing.

I'm not claiming it's some revolutionary technique. It's just a different input method that, for me, unlocked a different way of communicating with the AI. Your mileage may vary.

## The result: a terminal that works

The site is live at [davideimola.dev](https://davideimola.dev).

![The current davideimola.dev — terminal aesthetic, dark design system, character](/images/blog/i-rebuilt-my-site-twice/current-site.webp)

The thing I'm most happy with is the **terminal aesthetic**: commands as navigation, monospace headings, a design language that feels like it was made by a developer, not by someone using a portfolio template. That direction came from the AI noticing what I kept gravitating toward and suggesting something more committed.

It has character now. The old site didn't.

The tech stack is Next.js, Tailwind v4, TypeScript. Everything I'd have chosen anyway, but this time built on a design system that makes the codebase coherent instead of a collection of decisions made in different moods.

## What's next

The rebuild was the experiment. What I took away from it was a method: a way of working with AI that I've since applied to other projects.

I extracted that method into something replicable. That's the next post.
]]></content:encoded>
      <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>AI</category>
      <category>Developer Experience</category>
      <category>Next.js</category>
      <category>Personal</category>
    </item>
    <item>
      <title>Beyond Resolutions: 2025 Retrospective &amp; The 2026 Systems</title>
      <link>https://davideimola.dev/blog/2025-retrospective-2026-outlook</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/2025-retrospective-2026-outlook</guid>
      <description>A different kind of retrospective. Instead of a chronological list, I explore the themes, systems, and impact matrices that defined 2025 and architect the outlook for 2026.</description>
      <content:encoded><![CDATA[
## The "Word" of 2025

2025 wasn't just another year; it was a year defined by **Evolution**.

> This year marked a significant shift in my career and personal life. Unlike my [2023](/blog/2023-retrospective) and [2022](/blog/2022-retrospective) reviews, it wasn't about learning a new framework or shipping a side project; it was about maturing. From my role as a Senior Software Engineer to stepping up as a Team Leader at [RedCarbon](https://redcarbon.ai), and realizing a lifelong dream in Japan, 2025 was the year I moved from "doing" to "leading" and from "wishing" to "experiencing".

---

## Impact Matrix (Highlights & Lowlights)

Instead of a chronological list, here are the moments that mattered, categorized by their impact on my life and career.

### High Impact / High Joy (The Wins)

_These are the moments I want to multiply in 2026._

- **Family Milestone:** Celebrating my girlfriend's graduation was a huge moment for us. Seeing her hard work pay off was one of the happiest days of the year.
- **RedCarbon Series A & Growth:** Finally, the company is really gearing up. Closing our Series A round brought a huge wave of energy. Being part of this growth and transitioning into a Team Leader role at [RedCarbon](https://redcarbon.ai) has been incredibly rewarding. A massive shoutout to the entire team—we achieved this together (and the retreat at **Lake Viverone** was the perfect celebration!).
  ![Photo of the RedCarbon Team on December 2025](/images/blog/2025-retrospective/redcarbon_team.webp)

- **Linkin Park - First Live Concert:** I attended my first-ever live concert at I-Days, and it was legendary. Seeing **Linkin Park** return to the stage years after Chester Bennington's passing was emotional and powerful. The energy was indescribable.
  ![Photo of Linkin Park Concert at I-Days](/images/blog/2025-retrospective/idays_lp.webp)

- **Japan - The Dream Trip:** In June, I finally fulfilled my dream of visiting Japan. It was incredible—a perfect mix of history and "nerd" culture. Visiting **Sensoji**, the first temple I saw, remains a core memory, along with collecting _Goshuin_ and exploring the arcades. It wasn't just a vacation; it was an experience that left me wanting more.
  ![Photo of Sensoji Temple by Night](/images/blog/2025-retrospective/sensoji.webp)
- **OS Day 2025:** Organizing this year's edition of [OS Day](https://osday.dev) was a massive amount of work, but the payoff was huge. The feedback from speakers and the audience was overwhelmingly positive. The retreat in **Umbria** with the Schroedinger Hat folks was the perfect way to recharge and connect deeper with the community.
  ![Photo Group of Speakers and Organizers at OS Day 2025](/images/blog/2025-retrospective/osday25.avif)

### The "Shift" (High Impact / New Challenge)

_Necessary evils that moved the needle._

- **The Career Dilemma (Technical vs. Management):** This year I faced a big question: Technical Lead or Staff Engineer? I decided to step into the Team Lead role to explore the management side. The driving force isn't title, but impact: I want to support the team's growth and help my Engineering Manager by sharing the strategic load. It’s challenging, but it's clarifying my path between pure management and technical leadership.

### The "Lesson Learned" Moments

_Failures or deviations that taught me something valuable._

- **Stepping Back from the Stage:** I participated in very few events this year and none as a speaker (shoutout to GO Lab where I attended just to enjoy the community!). I learned that it's okay to pause public speaking to focus on career growth. You can't do everything at 100% all the time.

---

## The "System" Upgrade (v2026)

How my "Personal OS" evolved this year.

### Tools of the Trade

- **AI-Powered Development (Antigravity & Cursor):** Towards the end of the year, I refreshed this very website using AI tools like Cursor and Google's Antigravity. It was an experiment to see what "low-code/high-AI" could do, and the results are interesting. (Expect a full technical deep dive on this in early 2026!)

### Mindset Shifts

- **Exploration over Definition:** I'm not locking myself into a "Manager" or "IC" box yet. I'm using this TL role to learn skills I didn't have (delegation, project planning) while keeping my technical roots alive.

### Habits That Stuck

- **Ju Jutsu (Hontai Yōshin-ryū):** I took a few trial lessons in December at [Yawara](https://www.camyawaravr.it/wp/) and was instantly hooked. It's my commitment to physical health (weight loss goals!) and mental discipline. I'm enrolling officially in Jan 2026 to start this journey for real!

- **BBQ Mastery:** I’ve leveled up my cooking game significantly this year. From ribs to huge steaks and picanha, the BBQ became my weekend meditation (and the best way to feed friends).

---

## Nerd Culture & Media Diet

It wouldn't be a retrospective without the "fun" stats. 2025 was a great year for stories.

### Gaming & Tabletop

- **Video Games:** I hit 100% completion in _Clair Obscur: Expedition 33_ (absolute masterpiece) and I'm currently wandering the lands of _Ghost of Tsushima_.
- **Board Games:** _Pandemic Legacy_ took the crown this year.
- **D&D:** Played two campaigns that I loved. First as **Silas Ramsay** (Elf Rogue), and then as **Thalion Melora** (Half-Elf Bard).

### Watchlist

- **Documentaries:** _My Octopus Teacher_ and _Free Solo_ — both incredibly inspiring for different reasons.

---

## 2026 Outlook: Architecting the Future

### The North Star

**Consolidation & Content** -> In 2026, I want to solidify my leadership role and share my journey more consistently with the community.

### Areas of Focus

1.  **Career Clarity:** Continue navigating the TL path. By the end of 2026, I want a clearer answer to the "Manager vs. Staff Engineer" question.
2.  **Content Consistency:** I want to be more present socially. More technical blog posts, LinkedIn updates, and newsletters. Sharing what I learn, as I learn it.
3.  **Physical & Mental Discipline:** Consistency in Ju Jutsu. It's not just a sport; it's a lifestyle change I'm ready for.

---

## The Anti-Goals

To make space for the above, I am explicitly deciding **NOT** to do the following in 2026:

- **Stress-Driven Speaking:** I'd love to return to the stage, but only if it's fun and sustainable. No more speaking just for the sake of it ("nice to have", not "must have").
- **Stagnation:** I refuse to settle. Whether it's pushing for the Series A goals, exploring AI, or earning my first belt, 2026 is about forward motion.

---

## Let's Connect

How are you approaching 2026? Are you setting resolutions or building systems? I’d love to hear your "Word of the Year" or any advice you have for a new Technical Leader. Let’s chat on [BlueSky](https://bsky.app/profile/davideimola.dev) or [LinkedIn](https://www.linkedin.com/in/davideimola/)!
]]></content:encoded>
      <pubDate>Wed, 31 Dec 2025 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Retrospective</category>
      <category>Year Review</category>
      <category>Systems</category>
      <category>Productivity</category>
    </item>
    <item>
      <title>Why Git Visibility Matters More Than Git Mastery</title>
      <link>https://davideimola.dev/blog/git-visibility-git-mastery</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/git-visibility-git-mastery</guid>
      <description>Git problems rarely come from missing commands, but from missing visibility into what is really happening in the repository.</description>
      <content:encoded><![CDATA[
Most developers assume that getting better at Git means learning more commands.

Interactive rebases. Advanced resets. Obscure flags you only use once a year.

Those skills are useful, but they are rarely what prevents real problems.

In practice, most Git issues happen because developers lack **visibility**, not because they lack knowledge.

## Git Mastery Is Not the Same as Git Awareness

You can know Git very well and still make bad decisions if you do not clearly see what is happening.

Common examples include:

- Committing to the wrong branch under time pressure
- Rebasing shared history without realizing who else is affected
- Merging changes without understanding how branches diverged
- Reviewing code without seeing the broader context of the work

None of these are caused by missing commands. They are caused by missing context.

## Invisible Problems Are the Most Expensive Ones

![Invisible Risks in Git](/images/blog/git-visibility-git-mastery/invisible-risk-blog.webp)

The most costly Git mistakes are usually discovered late:

- After a merge conflict explodes
- During a rushed hotfix
- When a release branch suddenly diverges from reality

At that point, Git does exactly what it was designed to do. It preserves history. It enforces consistency.

The problem is that humans struggle to reason about invisible state.

## Visibility Changes How You Think About Git

![Visibility Illuminating Git Structure](/images/blog/git-visibility-git-mastery/visibility-structure-blog.webp)

When Git state is visible, your mental model improves automatically.

Seeing a commit graph helps you:

- Understand why a conflict exists before resolving it
- Spot risky merges early
- Reason about history as a system, not a sequence of commands

This reduces cognitive load. You spend less time reconstructing what happened and more time deciding what to do next.
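You don't need a GUI to get a basic version of this view; Git ships with one. A self-contained sketch, using a throwaway repo in a temp directory and hypothetical branch names:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Simulate two lines of work that diverge and then merge back.
git commit --allow-empty -qm "initial commit"
git switch -qc feature
git commit --allow-empty -qm "feature work"
git switch -q -
git commit --allow-empty -qm "main moves on"
git merge -q --no-edit feature

# One command that shows structure instead of a flat list:
git log --graph --oneline --decorate --all
```

The `--graph` output makes the divergence and the merge visible at a glance; visual Git tools build on this same underlying data, just with better presentation.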

This matters even more in teams, where context is distributed across people, time zones, and tools.

## Where AI Fits In, Realistically

![AI Compressing Context](/images/blog/git-visibility-git-mastery/context-compression-blog.webp)

AI does not make Git simpler by default.

What it can do is **compress context**:

- Summarize what changed across multiple commits
- Explain large diffs in plain language
- Highlight unusual patterns in history

Used this way, AI supports visibility rather than replacing understanding.

If you already have clear context, AI makes you faster.  
If you do not, AI just gives you confident-sounding guesses.

## Git Problems Are UX Problems

Many Git frustrations are really usability issues:

- Too much information in the wrong format
- Important signals buried in logs
- Tools optimized for machines, not humans

Improving Git workflows often means improving how information is presented, not adding more rules or commands.

Better visibility leads to better decisions, even with the same level of technical skill.

## The Real Skill to Develop

Git mastery looks impressive, but Git awareness is what keeps teams productive.

If you want fewer incidents, cleaner history, and calmer reviews, focus on making Git state visible and understandable.

Once you can see what is happening, the right command usually becomes obvious.
]]></content:encoded>
      <pubDate>Sat, 20 Dec 2025 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Git</category>
      <category>Security</category>
      <category>AI</category>
      <category>Developer Experience</category>
    </item>
    <item>
      <title>AI Will Not Secure Your Codebase. But It Can Reveal Dangerous Git Habits.</title>
      <link>https://davideimola.dev/blog/ai-will-not-secure-your-codebase</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/ai-will-not-secure-your-codebase</guid>
      <description>AI won&apos;t secure your codebase, but it can expose the risky Git habits that quietly turn your repository into part of the attack surface.</description>
      <content:encoded><![CDATA[
When people talk about AI and cybersecurity, the conversation often jumps straight to threat detection, malware analysis, or automated remediation.

But one of the most overlooked attack surfaces is much simpler:
your Git history.

Most security incidents do not start with sophisticated exploits. They start with small workflow mistakes that accumulate quietly over time.

## Git History Is Part of Your Attack Surface

From a security perspective, Git repositories contain more than code:

- Secrets committed by mistake
- Debug flags left enabled
- Experimental changes merged too early
- Force-pushes that erase audit trails

None of these issues are exotic. They are everyday Git habits.

![Hidden Secrets in Git History](/images/blog/ai-will-not-secure-your-codebase/hidden-secrets-blog.webp)

Once pushed, Git history is hard to truly erase. Even removed secrets may still exist in forks, clones, or CI logs.
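You can see this for yourself in a throwaway repo. This is a sketch with a hypothetical file and key, not a real incident:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Commit a secret, then "delete" it in a follow-up commit.
echo 'API_KEY=abc123' > config.env
git add config.env && git commit -qm "add config"
git rm -q config.env && git commit -qm "remove config"

# The worktree is clean, but every revision is still searchable:
git rev-list --all | while read -r rev; do
  git grep -n "API_KEY" "$rev" || true
done
```

The scan still finds the key in the first commit. Actually removing it means rewriting history, and in any case the only safe response to a leaked credential is rotating it.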

## Where AI Can Actually Help Security

AI does not make Git secure by default. What it can do is surface risk earlier.

![AI Scanning Code for Risks](/images/blog/ai-will-not-secure-your-codebase/ai-analysis-blog.webp)

Used responsibly, AI-assisted tooling can:

- Flag commits that introduce credentials or sensitive patterns
- Summarize large diffs to help reviewers spot risky changes
- Highlight unusual history rewrites or abnormal commit behavior

This is not about replacing security reviews. It is about reducing the chance that human reviewers miss something obvious under time pressure.

## Visual Context Matters for Security Too

Security issues often hide in complexity.

A visual commit graph makes it easier to:

![Visual Commit Graph](/images/blog/ai-will-not-secure-your-codebase/visual-graph-blog.webp)

- Spot unexpected merges into protected branches
- Notice rebases that rewrite shared history
- Understand when and where sensitive code entered the repo

When security relies only on CLI output and logs, context is easy to lose. Visual tooling helps teams reason about risk, not just commands.
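Plain Git can already sketch this graph in the terminal. A minimal throwaway-repo example (branch names are made up):

```bash
# Scratch repo: do some work on a feature branch, merge it back,
# then render the topology as an ASCII graph.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
git checkout -q -b feature
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "feature work"
git checkout -q -
git -c user.email=demo@example.com -c user.name=demo merge -q --no-ff feature -m "merge feature"

# The text version of what a visual client draws:
git log --graph --oneline --all
```

Dedicated visual clients add context on top of this, but even the plain output makes an unexpected merge or a rewritten branch stand out.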

## AI Without Process Is a False Sense of Safety

AI cannot compensate for weak Git hygiene.

If your team:

- Lacks branch protection
- Rewrites history without discipline
- Treats reviews as a formality

Then AI will only produce better summaries of bad practices.

Security improves when AI is layered on top of:

- Clean, auditable history
- Clear branching rules
- Tooling that makes intent visible

## The Real Takeaway

AI will not secure your repositories for you.

But it can act as an early warning system that exposes risky Git behavior before it becomes an incident.

In cybersecurity, visibility is everything.
Git history is visibility you already have. You just need to treat it as such.
]]></content:encoded>
      <pubDate>Sat, 13 Dec 2025 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Git</category>
        <category>Security</category>
        <category>AI</category>
        <category>Developer Experience</category>
    </item>
    <item>
      <title>2023 Retrospective</title>
      <link>https://davideimola.dev/blog/2023-retrospective</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/2023-retrospective</guid>
      <description>A look back at 2023 - reflecting on goals, business growth, health, community involvement, learning, and plans for 2024.</description>
      <content:encoded><![CDATA[
Here it is, the end of 2023. What a year it has been. I've been thinking about what I want to do in 2024, but before I do that, I want to take a look back at 2023.

I am writing this retrospective for the second year in a row. I think it is a good way to reflect on the year and to think about what I want to do in the next year.

I am going to use more or less the same format as [last year](./2022-retrospective), but I am going to add a few more sections.

- Check-in on 2023 Goals
- Business
- Health
- Friends & Family
- Community, Networking, and Speaking
- Learning
- Hobbies
  - Series and Films
  - Music
  - Games
- Travel
- 2024
- Conclusions

## Check-in on 2023 Goals

I had a few goals for 2023. Let's see how I did. I am going to put a ✅ if I did it, a ❌ if I did not do it, and a 🟡 if I did it, but not as much as I wanted.

For each goal, I am going to write a few words about it.

- ❌ Eat healthier and try to lose some weight 🥒<br/>
  I did not do this. I did not eat healthier, and I did not lose any weight. I tried for a while, but I did not stick with it. I am going to try again in 2024, but I definitely need to see an expert to help me with this.
- ✅ Stay healthy (you don't say) 👨🏻‍<br/>
  For sure I can improve this by reducing my weight, but I think I did a good job this year. I did not get sick, and I did not have any injuries.
- 🟡 Read more books 📚<br/>
  I read more books than last year, but I did not read as many as I wanted. I would like to read more in 2024.
- ✅ Improve my Skills (Golang, Kubernetes)👨🏻‍💻<br/>
  This is based on my feelings, but I think I improved my skills quite a lot this year. I have not been certified in Kubernetes yet, but I am working on it.
- 🟡 Start learning Rust 🦀 <br/>
  I started learning Rust, but I did not do it as much as I wanted. I am going to continue learning it in 2024.
- ✅ Finish the work in the house and move in with Sara 🏡 <br/>
  We did it! We finished the work in the house, and we moved in. We are very happy with the result. We still have some things to do, but we are living in the house, and we are very happy.
- ✅ Make a long trip (January is coming… Emirates-Qatar-Oman… 🛳️🤫) <br/>
  I went on a cruise to the Emirates in January, and it was amazing. I am going to write more about it in the travel section.
- ✅ Try to organize something incredible for the community ❤️ <br/>
  I organized the [Open Source Day 2023](https://2023.osday.dev/), and it was amazing. I am going to write more about it in the community section.
- ✅ Have a speech at a conference or meetup <br/>
  Oh yes! I did it!!! I had the pleasure of speaking at 5 different conferences. I am going to write more about it in the speaking section.

At the end, I am happy with what I did. I did not do everything I wanted, but I did a lot of things. I am going to try to do better in 2024.

Of course, my greatest disappointment is not losing weight. But, as I already said, I am going to try again in 2024.

## Business

Pretty much the same as last year. I am still working at [RedCarbon](https://www.redcarbon.ai/), and I am still very happy with it.

My role is still the same, but I am doing a lot more things. I am working on a lot of different projects, and I am learning a lot of new things.

My commitment is always at my best, and I am truly happy that is recognized by my colleagues and my boss. In fact, I got a promotion and a bonus this year, and I am very proud of myself.

## Health

As in the previous year, I need to focus more on this. I am not eating healthier, and I am not exercising much.

I know that my current state is not so good, a lot of friends have told me that, and I know that I need to change it. I am going to try to do better in 2024.

## Friends & Family

I moved in with Sara (my girlfriend), and we are very happy. We are still working on the house, but we are living in it, and we are very happy. Finally, I can invite my friends to my house, where we can play board games, watch series, and have fun.
I am still in touch with my friends, I'm usually hanging out with them once a week, and I am very happy about it. Also, I made some new friends by organizing the Open Source Day and going to conferences.

In conclusion, I can say that I am pretty satisfied with this part of my life.

## Community, Networking, and Speaking

Here it is, one of the newest additions to the retrospective. I am going to talk about the community, networking, and speaking.
2023 was game-changing for me. I started to be more active in the community, and I started to speak at conferences.

First of all, I have been one of the organizers for the [Open Source Day 2023](https://2023.osday.dev/). It was a great experience, and I am very happy about the outcome. I met a lot of new people, and I learned a lot of new things. I am going to organize it with the SH folks again in [2024](https://2024.osday.dev), and I am going to try to make it even better.

[📸 Open Source Day 2023 on Instagram](https://www.instagram.com/p/CqVxKwnoWTB/)

Second, I had the courage to start proposing talks, and unexpectedly, I got accepted to speak at 4 different conferences. I am going to list them here:

- [DevOps Day](https://www.youtube.com/live/wli1Vv9f_uw?si=LA0T7njnt-MCskMS&t=11030)
- [Incontro DevOps Italia 2023](https://2023.incontrodevops.it/)
- [DevSecOps Day](https://2023.devsecopsday.it/)
- [GoLab 2023](https://golab.io/past-editions/2023)

This was a great achievement for me. By speaking at these conferences, I met a lot of new people, and I learned a lot of new things. I am going to try to speak at more conferences in 2024.

[📸 Speaking at conferences on Instagram](https://www.instagram.com/p/CpxsFq2INBr/)

Third, I have finally organized a few meetups in my hometown. Probably, this was one of the most beautiful things I did in 2023. The reasons are pretty simple. Verona is not a big city, and it is not easy to find meetups here. I wanted to change this, and I did it! I organized 3 meetups in 2023, thanks to AQuest, and I am going to organize more in 2024.

## Learning

Learning is one of the most important things for me. I am always trying to learn new things, and I am always trying to improve my skills.
This year was no different. I started to learn Rust, and I am going to continue learning it in 2024.

I also started to increase my knowledge about Kubernetes and Go, and finally I can say that I am pretty confident with them. Of course, I still have a lot to learn!

This year I also understood that I'd like to learn more about security, Artificial Intelligence, and mobile development. I am going to try to learn more about these topics in 2024.

## Hobbies

For this year's retro, I'd like to give more space to my hobbies. I think it is important to talk about them and to write about them, so people can know me better.
I am going to talk a little more deeply about series and films, music, and games.

### Series and Films

Ok, let's start with series and films. Starting with the series: I watched a lot of them, and I am going to list a few here, in no particular order, as honorable mentions:

- [The Bear](https://www.imdb.com/title/tt14452776): What is this one?! Frustration, anger, stress, and a lot of other feelings. This is a short series, and it is very well done. It is a great cooking series, but not only that. I am not going to spoil anything, but I can say that it is worth watching.
- [DAHMER - Monster](https://www.imdb.com/title/tt13207736/): This is a great series. It is about the life of Jeffrey Dahmer, a very famous serial killer. It's not easy to watch it, but it is very well done.
- [The Good Place](https://www.imdb.com/title/tt4955642/): This is a comedy series, and it is very funny. I really enjoyed it.

Now, let's talk about films. I really love going to the cinema and watching films. I watched a lot of films this year, and I am going to list a few here, in no particular order, as honorable mentions:

- [Oppenheimer](https://www.imdb.com/title/tt15398776/): Christopher Nolan. I should not say anything else. One of the best biographical films I have ever seen. Through the life of J. Robert Oppenheimer, we can see the story of the atomic bomb. I do love the way the film brings you some very deep thoughts.
- [Barbie](https://www.imdb.com/title/tt1517268/): I went to the cinema with my girlfriend to watch this film, but I did not expect it to be so good. Fun but also deep. I really enjoyed it.
- [Top Gun: Maverick](https://www.imdb.com/title/tt1745960/): I liked the first Top Gun, so I watched this sequel. In my opinion, this is better than the first one.
- [The Northman](https://www.imdb.com/title/tt11138512/): I caught up on this film this year. For sure, I can say I am a huge fan of Viking mythology, and this film did not disappoint. I really enjoyed it.
- [The Lego Movie](https://www.imdb.com/title/tt1490017/): Another film I caught up on this year. I do love Lego, but I had always thought this film was for kids. I was wrong. It is a very funny film with a lot of references to other series and films!

### Music

I think one image may be enough to describe my music taste. My Spotify Wrapped 2023:

![Spotify Wrapped 2023](/images/blog/spotify-wrapped-2023-blog.webp)

### Games

From my childhood, I have always loved playing games. From video games to board games, I love them all. Unfortunately, I am not very consistent with them. I play a lot for a few months, and then I stop playing for a few months.
This year, also because of the house renovation, I did not have a lot of time to play, but I played a few games that I really enjoyed.

- [The Legend of Zelda: Tears of the Kingdom](https://zelda.nintendo.com/tears-of-the-kingdom/): Breath of the Wild is one of my favorite games ever, so I was very excited to play this sequel. I pre-ordered it as soon as I could, and I played from day one. Unfortunately, I did not have the time to finish it, but I am really enjoying it. If BOTW was good, this one is even better.
- [Hogwarts Legacy](https://www.hogwartslegacy.com/): I am a huge fan of Harry Potter, and I am a huge fan of RPG games. This game is a dream come true for me. I cannot explain the feelings I had when I saw Hogwarts in this game for the first time. Probably, it is not the best game ever, but I cannot be objective with this one. I love it.
- [Munchkin](https://amzn.eu/d/8Ro9N2w): As I said earlier, I also love board games. I played a few of them this year, but I'd like to mention Munchkin. It is a very fun game to play with friends.
- [Si, Oscuro Signore!](https://it.wikipedia.org/wiki/S%C3%AC,_Oscuro_Signore!): This is an Italian game, and it is very fun to play with friends. It is interesting because you have to create stories and characters. It is like Dungeons and Dragons, but far simpler. You don't need a game master, and you don't need to create a character. You just play as a Goblin or the Dark Overlord.
- [Ka-Blab](https://amzn.eu/d/7NttaGo): This is one of my latest purchases. I got it quite by accident, as I was looking for a simple game to play with a few friends. It is a very simple game, but it is very fun to play.

## Travel

Finally, I can talk about travel. I love traveling, and I love visiting new places. This year I did not travel a lot, but I did a few things.
I had a cruise in January, and it was amazing. I visited a lot of new places!

I visited Dubai and Abu Dhabi in the Emirates. I went to the top of the Burj Khalifa, and it wasn't so amazing, because I suffer from vertigo. 🤦🏻‍
I also visited the Sheikh Zayed Grand Mosque, and it was by far one of the most luxurious places I have ever seen.

[📸 Travel on Instagram](https://www.instagram.com/p/CoCqiiPI__r/)

Everything is so big there. Including the malls. They also have a ski slope inside a mall. It is crazy!

On the same cruise, I also visited Doha in Qatar. Qatar hosted the 2022 FIFA World Cup, so I found a lot of things still related to football. As I understood from the locals, Doha is a very modern city, and it is growing very fast. So I think it is worth visiting again in a few years.

Last, but not least, I visited Muscat in Oman. Unfortunately, I did not have a lot of time to visit it, but I really enjoyed it. I visited the Sultan Qaboos Grand Mosque, and it was amazing. I also visited the Mutrah Souq, and it was very interesting.

This year, I also visited a few places in Italy. I visited Trento and Arco in Trentino Alto Adige. I have also been to Florence three times this year; I can say I am starting to know it very well!

Udine, or better, San Daniele del Friuli (for the ham), and Torino are the other two cities I visited this year. I don't have a lot to say about them, but I would like to mention one thing about Torino: the Egyptian Museum. I had the opportunity to visit it with a guide thanks to my company. It is something out of this world. It gives me the chills just thinking about it. I cannot explain the feelings I had when I saw the mummies. I think it is something that everyone should see at least once in their life. Now, I would like to go to Egypt to see the pyramids and the other things I did not have the opportunity to see.

## 2024

I think I have talked enough about 2023. Now, it is time to look forward to 2024. As last year, I want to set some goals for the next year.
For the next year, I want to focus more on my health and my hobbies. I know, as a developer, it is not easy to find the time to do everything, but I think it is important to find the time to do the things you love.

So, let's start:

- I want to lose weight. I know it is not easy, but I want to try. I want to start walking more and try to eat better.
- My girlfriend has gifted me a board game of the Lord of the Rings. I want to organize a good campaign with my friends!
- Start playing role-playing games again. But this time, I want to do it more consistently.
- I want to organize more and more meetups in Verona!
- I want to learn more about security, Artificial Intelligence, and mobile development.
- I would love to organize a better event for the next Open Source Day!
- Create more content for my blog, YouTube channel, and Twitch channel.

## Conclusions

And that's all. It seems like only yesterday I was writing the 2022 retrospective, and now I am writing the 2023 one. Time flies, and I think it is important to stop and reflect on what you have done in the last year, and on what you want to do in the next one.

I hope you enjoyed reading this post, and I hope you will have a great 2024!
]]></content:encoded>
      <pubDate>Thu, 28 Dec 2023 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Retrospective</category>
        <category>Year Review</category>
        <category>Community</category>
        <category>Learning</category>
    </item>
    <item>
      <title>Level UP your RDBMS Productivity in GO</title>
      <link>https://davideimola.dev/blog/level-up-your-rdbms-productivity-in-go</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/level-up-your-rdbms-productivity-in-go</guid>
      <description>Learn how to improve your RDBMS productivity in GO with sqlc, dbmate, and docker test. A comprehensive guide to handling SQL databases efficiently in Go.</description>
      <content:encoded><![CDATA[
> [!IMPORTANT]
> All the things in this article are highly opinionated, and they are not a standard. I'm just sharing my experience and what I think is the best way to do it.
> If you have a better way to do it, please let me know in the comments. Examples are in PostgreSQL, but you can use the same approach for MySQL, SQLite, etc.
>
> No DB have been harmed in the making of this article, but a couple was truncated. 🤫

## Let's start with the actual status

Handling SQL databases in Go, as in other languages, can bring a lot of pain and frustration.

We may encounter a lot of problems, like:

### Handling the DB Code

For sure, you have seen a lot of code like this:

```go
func (s *Store) ListUsers(ctx context.Context) ([]User, error) {
    rows, err := s.db.QueryContext(ctx, "SELECT * FROM users")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var users []User
    for rows.Next() {
        var user User
        if err := rows.Scan(&user.ID, &user.Name, &user.Email); err != nil {
            return nil, err
        }
        users = append(users, user)
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return users, nil
}
```

Isn't it beautiful? Let's be honest: it's not! Who loves writing all this code again and again? I don't!

No, Copilot (or any generative AI 🤖) is not the solution.

### Finding hidden errors in SQL

We may have a lot of errors in our SQL code that we can't find until we run the code.

Let's play! Can you find the error in this code? If yes, write it in the comments.

```go
func (s *Store) ListUsers(ctx context.Context) ([]User, error) {
    rows, err := s.db.QueryContext(ctx, "SELECT * FROM upsers")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var users []User
    for rows.Next() {
        var user User
        if err := rows.Scan(&user.ID, &user.Name, &user.Email); err != nil {
            return nil, err
        }
        users = append(users, user)
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return users, nil
}
```

### SQL Injection

Security? What is that? 🤔 Let's take this code as an example:

```go
func (s *Store) GetUser(ctx context.Context, id string) (*User, error) {
    // DANGER: user input is formatted straight into the query string.
    row := s.db.QueryRowContext(ctx, fmt.Sprintf("SELECT * FROM users WHERE id = %s", id))

    var u User
    if err := row.Scan(&u.ID, &u.Name); err != nil {
        return nil, err
    }
    return &u, nil
}
```

As you see, we are using `fmt.Sprintf` to build our query. This is a very bad practice because we are exposing ourselves to SQL Injection.

> [!WARNING]
> Never use string formatting to build SQL queries. Always use parameterized queries or a type-safe query generator to avoid SQL Injection vulnerabilities.

SQL Injection is a code injection technique that might destroy your database. It is one of the most common web hacking techniques.

For example, in this case, if the user passes the value `1 OR 1=1` as `id`, the query becomes:

```sql
SELECT * FROM users WHERE id = 1 OR 1=1
```

And this will return all the users in the database.

### Code and Database Synchronization

Maintaining synchronization between code and the database schema is critical to avoid runtime errors:

```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);
```

```go
func (s *Store) ListUsers(ctx context.Context) ([]User, error) {
    rows, err := s.db.QueryContext(ctx, "SELECT * FROM users")
    // ...
}
```

Adding a new column in the database without updating the code could lead to errors.

### Manual type sync and possible downtimes

Doing things manually is always a bad idea, because we are humans and we make mistakes.

And if we do things manually, we may have some downtime in our application, because we need to stop the application, run the migration, and then start it again.

This is not a good idea, especially if we have a lot of users.

### Automated tests with DB (Why mocking is not a good idea)

When performing unit tests, we are always going to mock the DB, because we don't want to bring the DB up and down for every test.

But mocking the DB is not a good idea, because we are not testing the real code. We are testing fake code that we wrote.

So, if we have a bug in our SQL code, we are not going to find it until we run it somewhere.

## What can we do?

Ok, we have seen a lot of problems, but what can we do to solve them? 🤔

In this article, we are gonna see how to solve all these problems with the help of some tools and paradigms.

- SQL-first approach
- Migrations
- Test containers (or Docker test)

## SQL-first approach

The SQL-first approach is a paradigm that focuses on writing the SQL code first and then generating the application code from it.

This approach is very useful because we focus on the SQL code and not on how to handle it inside the application code.

There are other approaches which you can use, like:

- ORM (Object Relational Mapping)
- Query Builders

### ORM

ORM is a programming technique that enables a seamless conversion between data stored in a relational database table to an object-oriented programming language.

So you are going to create a code like the following:

```go
// Read
var product Product
db.First(&product, 1) // find product with integer primary key
db.First(&product, "code = ?", "D42") // find product with code D42

// Update - update product's price to 200
db.Model(&product).Update("Price", 200)
// Update - update multiple fields
db.Model(&product).Updates(Product{Price: 200, Code: "F42"}) // non-zero fields
```

I don't like ORMs so much, not only because I think the APIs built for Go are ugly, but also because you are not writing SQL code; you are writing code that is going to generate SQL code. Also, you can't use all the features of the DB.

### Query Builders

Query Builders are tools or libraries that provide a programmatic, fluent way to create SQL queries.

For example, you can write code like this:

```go
users := sq.Select("*").From("users").Join("emails USING (email_id)")

active := users.Where(sq.Eq{"deleted_at": nil})

sql, args, err := active.ToSql()

// sql == "SELECT * FROM users JOIN emails USING (email_id) WHERE deleted_at IS NULL"
```

The problem with this approach is that you don't generate type-safe code. You are just generating a string that you are going to pass to the DB.

So, you still need to map your data and maintain all the types.

Also, just for the record, I don't like the syntax of this library. I think it's not so readable, because you are mixing the SQL code with the Go code.

### SQL-first approach vs ORM vs Query Builders

I think the SQL-first approach is the best approach because you are writing SQL code and you are generating type-safe code.

Also, you can use all the features of the DB, like JSONB filtering, etc.

So I have made this table to compare the different approaches:

| Feature                        | SQL-first | ORM        | Query Builders |
| ------------------------------ | --------- | ---------- | -------------- |
| Type-safe                      | ✅        | ✅         | ❌             |
| All DB features                | ✅        | ❌         | ✅             |
| Protect you from SQL Injection | ✅        | ✅         | ❌             |
| Clean API                      | ✅        | ❌ (in GO) | ❌             |
| Code generation                | ✅        | ❌         | ❌             |
| I like it                      | ✅✅✅✅  | ❌         | ❌             |

### Use a mixed approach

The best thing you can do is to use a mixed approach. You can use the SQL-first approach for the most common queries and then use the ORM or Query Builders for the rest.

This is because not all queries are the same. Some queries are very simple, and you don't need to write a lot of code; others are very complex, and you do.

Also, some queries may change at execution time depending on different factors, so an SQL-first approach is not the best solution in those cases.

## Migrations

Migrations are a way to keep your DB schema in sync with your code. They are very useful because you can keep track of all the changes you made to the DB.

Also, you can use them to create the DB schema from scratch.

A migration consists of 2 parts:

- Up - the code that is executed when you apply the migration
- Down - the code that is executed when you roll back the migration

For example, let's say that we want to create a table called `users` with the following schema:

```sql
-- migrate:up

CREATE TABLE users (
    id VARCHAR PRIMARY KEY,
    name VARCHAR NOT NULL,
    email VARCHAR NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- migrate:down

DROP TABLE users;
```

Migrations are usually stored inside a directory within the source code and they are named with a timestamp and a name.

They can be executed in 2 ways:

- Manually: You can run the migration manually with a CLI
- Automatically: You can run the migration automatically when the application starts
  - By running the migration inside the code
  - By running the migration through a Job or a CronJob

### Evolutionary Database Design

Evolutionary Database Design is a technique that allows you to evolve your database schema in a simple and agile way.

The idea is to create a migration for every change you make to the DB schema. So, you can keep track of all the changes you made.

If you want to add a breaking change, you must introduce it in multiple steps. Because, you can't break the application.

If you want to learn more about this technique, I suggest reading [this article](https://martinfowler.com/articles/evodb.html).

## Test containers (or Docker Test)

Earlier, we talked about the problems of mocking the DB. So, how can we test our code without mocking it?

The answer is simple: **Test containers**.

Test containers are a way to run a real DB instance inside a container and then run the tests against it. So, we are going to test the real implementation of the code.

For example, let's say that we want to test a code which is going to interact with a DB.

With test containers, we can run a real DB instance inside a container and then run the tests against it.

There's no magic here. We are just running a real DB in a "Dockerized" environment. So, you are sure that the code is working as expected where it's gonna run.

Also, you can run the tests in parallel, because you are not sharing the DB instance with other tests.

The best thing is that you can run the tests in your CI/CD pipeline. So, you are sure that the code is working as expected. You must simply have a Docker environment.

This thing does not apply only to the DB, but to all the external services you are using in your application, like Redis, Kafka, etc.

## Let's code

Ok, now that we have seen the theory, let's see how to do it in practice.

For the purpose of this article, we are going to set up a simple application that is going to handle users.

The application is going to expose the following proto service.

```proto
service UsersService {
  rpc CreateUser(CreateUserRequest) returns (CreateUserResponse) {}
  rpc ListUsers(ListUsersRequest) returns (ListUsersResponse) {}
  rpc DeleteUser(DeleteUserRequest) returns (DeleteUserResponse) {}
  rpc GetUser(GetUserRequest) returns (GetUserResponse) {}
}
```

I have decided to use [gRPC](https://grpc.io/) because it's a very simple protocol and it's very easy to use.

So, let's start with the code.

### Create the schema

The first thing we are going to do is to create the schema of the DB.

As we want to keep track of our changes to the DB, we are going to use migrations. In this case, we are going to use [dbmate](https://github.com/amacneil/dbmate). But you can use any other tool you want.

So, let's create the first migration by performing the following commands in the terminal:

```bash
dbmate new init_users_table
```

This is going to create a new migration file called `XXXXXXXXXXXXX_init_users_table.sql`, where `XXXXXXXXXXXXX` is a timestamp.

Now, let's open the file and write the following code:

```sql
-- migrate:up

CREATE TABLE users (
    id VARCHAR PRIMARY KEY,
    name VARCHAR NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- migrate:down

DROP TABLE users;
```

As you see, we have created a table called `users` with the following columns:

- `id` - The ID of the user
- `name` - The name of the user
- `created_at` - The creation date of the user
- `updated_at` - The update date of the user

Now, let's run the migration. First, create a `.env` file that sets the `DATABASE_URL` environment variable to the DB connection string, then run the following command in the terminal:

```bash
dbmate up
```

This is going to create the table in the DB and a schema file which is going to be used by the code generator.
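For reference, dbmate reads the connection string from the `DATABASE_URL` environment variable, so a minimal `.env` for a local Postgres could look like this (host, credentials, and database name are placeholders):

```bash
# .env -- placeholder credentials for a local Postgres instance
DATABASE_URL="postgres://postgres:postgres@127.0.0.1:5432/app?sslmode=disable"
```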

### Generate the code

Now, we are going to generate the code. For this purpose, we are going to use [sqlc](https://sqlc.dev).

First of all we need to create a sqlc configuration file called `sqlc.yaml` with the following content:

```yaml
version: "2"
sql:
  - engine: "postgresql"
    queries: "internal/queries/"
    schema: "db/migrations/"
    gen:
      go:
        package: "queries"
        out: "internal/queries/"
        sql_package: "pgx/v5"
```

This is going to tell sqlc where to find the queries and the schema files, and where to generate the code.

Now, let's create the queries file called `internal/queries/users.sql` with the following content:

```sql
-- name: ListUsers :many
SELECT * FROM users LIMIT sqlc.arg('limit') OFFSET sqlc.arg('offset');

-- name: CountUsers :one
SELECT COUNT(*) FROM users;

-- name: CreateUser :one
INSERT INTO users (name) VALUES (@name) RETURNING *;

-- name: DeleteUser :one
DELETE FROM users WHERE id = @id RETURNING *;

-- name: GetUser :one
SELECT * FROM users WHERE id = @id LIMIT 1;
```

As you see, we have created the queries we need to handle the users. We have also added some arguments to the queries.

Now, let's generate the code by performing the following command in the terminal:

```bash
sqlc generate
```

This is going to generate the code inside the `internal/queries` directory.

Going back to the implementation of the service, we import the generated package and use the generated `Queries` struct.

For example, let's say that we want to implement the `ListUsers` method. We are going to write the following code:

```go
type srv struct {
	q *queries.Queries
}

func NewUsersService(q *queries.Queries) usersv1connect.UsersServiceHandler {
	return &srv{
		q: q,
	}
}

func (s srv) ListUsers(ctx context.Context, req *connect_go.Request[v1.ListUsersRequest]) (*connect_go.Response[v1.ListUsersResponse], error) {
	users, err := s.q.ListUsers(ctx, queries.ListUsersParams{
		Offset: req.Msg.Offset,
		Limit:  req.Msg.Limit,
	})
	if err != nil {
		return nil, err
	}

	tot, err := s.q.CountUsers(ctx)
	if err != nil {
		return nil, err
	}

	res := make([]*v1.User, len(users))
	for i, user := range users {
		res[i] = newUser(user)
	}

	return connect_go.NewResponse(&v1.ListUsersResponse{
		Users: res,
		Totat: int32(tot),
	}), nil
}
```

As you see, we are using the generated `Queries` struct to perform all the queries we need.

### Run the tests

Now, let's run the tests. For this purpose, we are going to use [dockertest](https://github.com/ory/dockertest), but [Testcontainers](https://testcontainers.com) is also a good option.

First of all, we need to configure the Postgres container. In dockertest, we create a `Pool` and then create the resource.

The resource can expose container ports through a port mapping. In this case, we expose port `5432/tcp`, as we are working with Postgres; dockertest finds a free host port and binds it to the container.

The container can be configured by passing environment variables, command arguments, and so on.

As the code is a bit long, I'm not going to paste it here, but you can find it [here](https://github.com/davideimola/rdbms-productivity-in-go/blob/68c979eadb608e7e5c29dd075c142262e94a3ca0/internal/testutils/pg.go).

Now, let's write the actual test. First, we call the `InitPostgres` function inside `TestMain` to start the Postgres container.

Then, we create a new `Queries` struct from the connection and pass it to the `NewUsersService` function.

Now, we can perform the tests. For example, let's say that we want to test the `ListUsers` method. We are going to write the following code:

```go
func TestListEmptyUsers(t *testing.T) {
	ctx := context.Background()
	req := connect.NewRequest(&v1.ListUsersRequest{
		Offset: 0,
		Limit:  10,
	})

	resp, err := usersCli.ListUsers(ctx, req)
	if err != nil {
		t.Fatalf("Could not list users: %s", err)
	}

	assert.Equal(t, int32(0), resp.Msg.Totat)
}
```

Run the test, and you will see it pass!

## Conclusions

In this article, we have seen how to improve our RDBMS productivity in Go. We have seen how to use the SQL-first approach, migrations, and tests backed by real containers.

We have also seen the benefits of using these tools and how to use them in a real application.

If you want to see the full code, you can find it [here](https://github.com/davideimola/rdbms-productivity-in-go).

I hope you have enjoyed this article. If you have any questions, please let me know in the comments.
]]></content:encoded>
      <pubDate>Tue, 05 Dec 2023 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Go</category>
        <category>PostgreSQL</category>
        <category>Database</category>
        <category>sqlc</category>
        <category>dbmate</category>
    </item>
    <item>
      <title>Securing Secrets in the Age of GitOps</title>
      <link>https://davideimola.dev/blog/securing-secrets-in-the-gitops-era</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/securing-secrets-in-the-gitops-era</guid>
      <description>Learn how to secure secrets in a GitOps workflow using Sealed Secrets, Secrets Managers, and the Secret Store CSI Driver. Managing sensitive data in Kubernetes.</description>
      <content:encoded><![CDATA[
Kubernetes and GitOps offer a powerful way to manage your infrastructure and applications. However, when it comes to securing sensitive information like passwords, tokens, and certificates, challenges arise. In this article, we'll explore different methods to secure secrets in the GitOps era and how to seamlessly integrate them into your workflows.

## The Power of Kubernetes Secrets

Kubernetes provides a dedicated solution for safeguarding sensitive data: Secrets. These are Kubernetes objects designed to securely store information like passwords, OAuth tokens, and SSH keys. Using Secrets is a safer and more versatile approach compared to embedding sensitive data directly into Pod definitions or container images.

But, as powerful as Secrets are, there's a significant challenge when it comes to managing them within a GitOps workflow.

## The GitOps Conundrum

GitOps revolves around using Git as the single source of truth for declarative infrastructure and applications. With Git at the heart of your delivery pipelines, developers can accelerate application deployments and streamline operations tasks in Kubernetes through pull requests.

However, when it comes to secrets, things get complicated. Secrets can't be stored directly in a Git repository because the data isn't encrypted; it's merely encoded in `base64`. Here's an example:

> [!WARNING]
> Base64 is encoding, not encryption. Anyone with access to the repository can decode it in seconds.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: default
data:
  foo: YmFy
```

To highlight the security issue with this approach, let's decode the data:

```bash
$ echo YmFy | base64 -d
bar
```

As you can see, if someone gains access to the Git repository, they can effortlessly decode the data and compromise the secret.

## Introducing Sealed Secrets

The solution to this problem is Sealed Secrets, a Kubernetes Custom Resource Definition Controller. It enables the encryption of Secrets, allowing you to store them in Git repositories safely. A Sealed Secret is shareable, even in public repositories, and can be given to colleagues, all while remaining impenetrable. Only the controller running in the target cluster can decrypt the Sealed Secret.

Using Sealed Secrets is straightforward. First, install the controller in your cluster using a Helm chart:

```bash
$ helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
$ helm install sealed-secrets sealed-secrets/sealed-secrets
```

After installation, you can use the `kubeseal` CLI to retrieve the public key and encrypt your secrets with it:

```bash
# Retrieve the public key
$ kubeseal --fetch-cert --controller-namespace=sealed-secrets --controller-name=sealed-secrets > pub-cert.pem

# Encrypt a secret
$ kubeseal --cert=pub-cert.pem --format=yaml < mysecret.yaml > mysealedsecret.yaml
```

The beauty of Sealed Secrets is that you can store the sealed secret in a Git repository and apply it to the cluster. The controller will decrypt the secret, creating the original Secret without requiring any changes to your app's Deployment.
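For reference, the generated manifest looks something like this (the ciphertext below is truncated and illustrative; only the controller's private key can decrypt it):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: default
spec:
  encryptedData:
    foo: AgBy3i4OJSWK...
```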

## Exploring Secrets Managers

Another approach for managing secrets in a GitOps workflow is using a Secrets Manager. These tools offer secure storage and management of secrets and come with numerous features, including encryption, access control, auditing, secret rotation, and more. Notable examples include:

- [HashiCorp Vault](https://www.vaultproject.io/)
- [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/)
- [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/)
- [Google Cloud Secret Manager](https://cloud.google.com/secret-manager)

While Secrets Managers offer robust security, they come with certain drawbacks, such as not being Kubernetes-native, a steep learning curve, costs (except for HashiCorp Vault's open-source version), and complex installation and maintenance.

## Bridging the Gap with Secret Store CSI Driver

To incorporate a Secrets Manager seamlessly into your GitOps workflow, consider using the Secret Store CSI Driver. It's a Kubernetes CSI driver that allows you to store and manage secrets in Kubernetes using your preferred Secrets Manager. This Kubernetes-native solution is easy to install and maintain, free, and open source. It supports a variety of Secrets Managers through provider-specific modules:

- [Vault Provider](https://github.com/hashicorp/secrets-store-csi-driver-provider-vault)
- [Azure Provider](https://azure.github.io/secrets-store-csi-driver-provider-azure/)
- [GCP Provider](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp)
- [AWS Provider](https://github.com/aws/secrets-store-csi-driver-provider-aws)

For detailed guidance on implementing these providers, refer to the [documentation](https://secrets-store-csi-driver.sigs.k8s.io/getting-started/getting-started).
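To give you a feel for the approach, here is a minimal sketch of a `SecretProviderClass` for the Vault provider (the address, role, and secret paths below are illustrative; check the provider documentation for the exact schema):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200"
    roleName: "app"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/app"
        secretKey: "password"
```

Pods then reference this class through a `csi` volume, and the driver mounts the secret as a file when the Pod starts.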

## Leveraging SDKs

Alternatively, you can integrate a Secrets Manager into your GitOps workflow by using SDKs provided by the Secrets Manager. These SDKs are available for various programming languages and can be incorporated into your application code to retrieve secrets. This approach offers flexibility but comes with the need to modify application code, manage SDKs within the application, and control access to the Secrets Manager.

## Making the Right Choice

In this post, we've explored multiple methods for securing secrets in a GitOps workflow. The decision ultimately rests with you, as you choose the solution that best aligns with your specific needs. However, it's crucial to prioritize security in your decision-making process. Choose wisely, and ensure your secrets remain safe and sound.
]]></content:encoded>
      <pubDate>Fri, 27 Oct 2023 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>GitOps</category>
        <category>Kubernetes</category>
        <category>Security</category>
        <category>Sealed Secrets</category>
        <category>DevOps</category>
    </item>
    <item>
      <title>2022 Retrospective</title>
      <link>https://davideimola.dev/blog/2022-retrospective</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/2022-retrospective</guid>
      <description>A look back at 2022 - my first year retrospective covering business changes, health, community involvement, learning new technologies, and goals for 2023.</description>
      <content:encoded><![CDATA[
2022 is ending, so I want to write my first-year round-up blog post.

I have already seen developers writing posts like this, so I want to try it myself. I start this round-up with these assumptions:

- It is my first round-up. Of course, it could be better, but it is an experiment, and I want to improve it in the future.
- This blog post is primarily for me. I want to use it as a time capsule, a special memory for the future.
- Those who read this blog post are more than welcome to leave a comment or a suggestion!

So let's start with this. I have decided to divide my round-up into the following macro areas:

- Business
- Health
- Friends & Family
- Learning & Hobbies
- Travel
- 2023
- Conclusions

## Business

Business is going pretty well this year. In September, I decided to move on in my career: I left my job as a DevOps Engineer at Milkman Technologies after two years.

I have joined RedCarbon as a DevOps Software Engineer. I am working on the cloud architecture and the back-end side of the project. I can say I am happy at the moment, because I am continuously learning new and cool stuff like Kubernetes, gRPC, and GraphQL.

This job change made me a Swiss 🇨🇭 worker, so I had to do all the paperwork to obtain a work permit in Ticino.

I embraced this change not only for the pay rise but to make an exciting step in my career. I have joined a smaller start-up, so I am helping with development team management too.

Besides my work, I am always eager to learn new things! Since February, I have been trying to help the guys at Fantaculo with their cloud infrastructure. For the curious folks, the project was born as a web app where you could calculate the "ass" of your fantasy soccer league, and it keeps adding features to help you manage your team better!

Fantaculo is going pretty well, thanks to the other contributors, and I am pretty sure we are building something "useful" (at least for me… I am tired of losing my league 😞).

Last but not least, I have worked on growing my connections with other developers. I have joined Schrödinger Hat, an Italian open source community, as a co-organizer. This led me to talk with many stimulating folks, and I must say it: I love it!

## Health

Focusing on health, I have to work harder next year. I left the gym because I did not find it stimulating enough, and I did not follow any diet, so my weight is not good. I must work on it.

Apart from the obesity issues, I am fortunately healthy at the moment.

## Friends & Family

2022 has changed different things concerning family and friends.

In June, I bought my first home. In the first months of the new year, I will be moving in with my girlfriend, Sara. 🏡

We have been together for more than four years, and I think we might be ready for this new, exciting chapter of our life. But I have to say, I am a little bit afraid, because moving in together is a big step.

Speaking of sadder things, in September I lost my grandfather. The years at his side were difficult because of Alzheimer's, but losing him was even worse. Forgive me for this melancholy passage. 😔

Speaking of friends instead, I made some new ones and saw others again whom I had not seen for a long time because of COVID.

## Learning & Hobbies

I would love to dedicate a special section of this annual review to what I have learned this year and to my hobbies.

Starting with what I have learned: I have studied many intriguing things this year. In the last months, as I said earlier, I started working with GraphQL and gRPC. I have always been interested in studying those two topics a little more, and now I can do it freely, and I can say I enjoy it.

In 2022 I finally started working with Kubernetes too! I had studied it for a while, and now I can get my hands dirty.

Talking about programming languages, I have started using Go a lot. It is peculiar, and you must accept different aspects of it, but I liked it and am looking forward to learning more about it next year!

Unfortunately, I have not read many books. The only one I would like to mention is "Never Split the Difference" by Chris Voss. I found different insightful ideas in this volume for better handling negotiations with clients and companies.

Last but not least, hobbies! I have started experimenting with and studying the secret art of American BBQ and sous vide! I am ready to start my new career as a chef! 👨🏻‍🍳

## Travel

I have always been a traveler! I love discovering new places and learning from new cultures. During 2022 I made a few trips.

Starting with Florence: I visited this beautiful city in March with Sara. We stayed there for three days, from Friday to Sunday, and saw most of the town and its gorgeous museums. Florence is a beautiful city of art, and I loved discovering it!

[📸 Instagram post](https://www.instagram.com/p/Ca7MwfFMoMg/)

Of course, I have to make a little mention of the food! The Fiorentina (T-bone steak) is awesome as always; I finally had the opportunity to taste the famous schiacciata of the Antico Vinaio; and, last but not least… the lampredotto sandwich! A-M-A-Z-I-N-G!

![Fiorentina](/blog-assets/tbone-florence.webp)

During the summer holidays, I decided to return to Calabria with my girlfriend, who is from there. We did not travel much, but we still visited a few places near Crotone and Cosenza.

[📸 Instagram reel](https://www.instagram.com/reel/CgUsNkfspdj/)

Maybe in 2023 we will go there by car and visit more places, and perhaps we will spend a week in Sicily too.

Other short trips I made in 2022 were to Germany, to attend a friend's wedding, and to Lugano, Switzerland, to sign my new contract and apply for a work permit. I do not have much to say about those trips. Lugano is a wonderful city, and I would love to explore it more deeply. Germany is always fantastic, and I am very happy I had the chance to visit Stuttgart for the first time!

The last trip I made was two days in Turin, to meet the whole dev team of my company and brainstorm with them!

Unfortunately, I did not have the chance to make an incredible trip this year, but I will not have to wait much longer. 😉

## 2023

Here it is, the scariest part of annual reviews… the New Year's resolutions! 😱

I do not know what to say. I do not want to write the same useless New Year's resolutions you can find all over the internet. I would love to make them mine, but I am struggling to be creative, so I will drop a TODO list, and with next year's annual review I will be able to evaluate the year better.

Ready, set, go!

- Eat healthier and try to lose some weight 🥒
- Stay healthy (you don't say) 👨🏻‍⚕️
- Read more books 📚
- Improve my skills (Golang, Kubernetes) 👨🏻‍💻
- Start learning Rust 🦀
- Finish the work in the house and move in with Sara 🏡
- Make a long trip (January is coming… Emirates-Qatar-Oman… 🛳️🤫)
- Try to organize something incredible for the community ❤️
- Have a speech at a conference or meetup 🎤

## Conclusions

2022 has been a fundamental year in my life. I went through a lot of changes, both good and bad.

I am looking forward to 2023, when I hope to see the results of this year's hard work. Of course, I am a little afraid of the future, as I am going to navigate several changes in my life, but I think I am ready.

See you next year!
]]></content:encoded>
      <pubDate>Fri, 23 Dec 2022 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Retrospective</category>
        <category>Year Review</category>
        <category>Career</category>
        <category>DevOps</category>
    </item>
    <item>
      <title>Git Hook in GitKraken Client with Husky and Nvm</title>
      <link>https://davideimola.dev/blog/git-hooks-gitkraken-client-husky-nvm</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/git-hooks-gitkraken-client-husky-nvm</guid>
      <description>How to configure GitKraken Client to run Git hooks with Husky and Nvm. A practical guide to setting up pre-commit hooks in your favorite Git GUI.</description>
      <content:encoded><![CDATA[
One of my favorite features of Git is [Git hooks](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) because they can let you check different things in your code such as linting, compiling, and testing before any Git action in your repository.

In my ideal project setup, I always add some pre-commit Git hooks, so that failing tests, linting errors, or build errors never make it into the repository.

Of course, as a user of GitKraken Client, I want to fire them through the software and know if I can commit something or not.

## Git hooks: What are They?

They are a way to fire off custom scripts when certain actions occur. We have two different groups of Git hooks: client-side and server-side. Client-side Git hooks are triggered by operations like committing and merging, while server-side Git hooks run on network operations such as push.

The following video explains the basics of Git hooks.

<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/ZZgyILr-TjA"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
  allowfullscreen
></iframe>

## What is GitKraken Client?

I think of GitKraken Client as a lifesaver in my software engineering career. For those who do not know it, it is a famous and fantastic Git GUI for performing all sorts of actions over Git.

I use it in combination with the Git CLI because it simplifies various actions such as switching between Git profiles, viewing history, and complex Git operations like rebasing and conflict resolution.

If you want, you can try it for free by using my [referral link](https://gitkraken.link/davideimola).

## Husky Configuration

The Husky configuration is pretty simple. First of all, install the `husky` package by executing one of the following commands in your terminal.

```bash
# NPM
npm install husky -D
# YARN
yarn add -D husky
```

After that, we have to add a simple script to our `package.json`, like the following.

```diff
{
	"name": "my-package",
	"scripts": {
+		"prepare": "husky install"
	}
}
```

Then run the `prepare` script to set up Husky in the repository.

```bash
# NPM
npm run prepare
# YARN
yarn run prepare
```

After preparing the project with Husky we can add a new simple hook by running the following commands.

```bash
npx husky add .husky/pre-commit "npm test"
```

This adds a pre-commit Git hook that runs all the tests before each commit is created.
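Under the hood, this command creates a `.husky/pre-commit` file containing roughly the following (the exact shim line depends on your Husky version):

```bash
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm test
```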

## Configuring GitKraken Client

GitKraken Client does not need any particular configuration since the release of [v7.7.2](https://support.gitkraken.com/release-notes/7x/#version-772), which added support for `core.hooksPath`, used by Husky since version 5. If you are using an older version, consider updating it, or have a look at this [workaround](https://github.com/typicode/husky/issues/875); it might help you.

## GitKraken Client and Husky NVM issue

While using my current workflow, I ran into a problem between GitKraken Client and Node Version Manager: GitKraken Client could not execute my Git hooks, and its logs showed that `npx` does not exist in the PATH.

Node version managers work by modifying the PATH when a terminal session starts. For this reason, GUI clients usually do not play well with these managers, as they do not source the `.zshrc` or `.bashrc` files where nvm is initialized.

The solution is to create a `~/.huskyrc` file with the following content.

```bash
# ~/.huskyrc
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
```

## Conclusions

I hope this post helps everyone interested in Git hooks to implement useful automation in their Git workflow and achieve better results with their code.

Finally, I can say I have fixed my hook workflow and learned something new about Git and NVM: useful things for my future career.
]]></content:encoded>
      <pubDate>Sat, 08 Jan 2022 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Git</category>
        <category>GitKraken</category>
        <category>Husky</category>
        <category>Nvm</category>
        <category>Tooling</category>
    </item>
    <item>
      <title>How to Delete Docker Image From Private Registry</title>
      <link>https://davideimola.dev/blog/delete-docker-image-from-private-registry</link>
      <guid isPermaLink="true">https://davideimola.dev/blog/delete-docker-image-from-private-registry</guid>
      <description>A practical guide to deleting Docker images from private registries using Docker&apos;s API. Step-by-step instructions for managing your private registry.</description>
      <content:encoded><![CDATA[
I have worked with Docker for a while and I love it, but in this post I do not want to talk about the pros of this application, because you can find tons of articles about it on the Internet. I want to focus on registries, more specifically the private ones.

On Docker Hub the delete process is extremely simple: just click a big red button, write down the name of the repository, and you are done. True, you can misspell the name, but I am confident you can do it! 😄

In the private registry, this magic red button does not exist and the process gets complicated ☹️.

The only working solution I have found is through Docker's API, so prepare yourself to execute some HTTP requests.

The first thing we need in order to delete an image is its manifest digest. To obtain this magic code, perform the following request.

```bash
curl -i -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://registry.url/v2/<image_name>/manifests/<image_tag>
```

Have you got any response from the API? Yes? Perfect! The first lines should look something like this.

```bash
HTTP/2 200
content-type: application/vnd.docker.distribution.manifest.v2+json
docker-content-digest: sha256:9d8a5598704c0427be6fed9937f62342db199c8a73083695f545e93fac3b08d8
docker-distribution-api-version: registry/2.0
```

The digest we are seeking is in the response headers: it is the one called `docker-content-digest`. Save it, because we have to use it in our next request.

```bash
curl -X "DELETE" https://registry.url/v2/<image_name>/manifests/<docker-content-digest>
```
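One caveat: the registry only accepts `DELETE` requests if deletion is enabled in its configuration; otherwise it answers with `405 Method Not Allowed`. In the registry's `config.yml`, the relevant setting is:

```yaml
storage:
  delete:
    enabled: true
```

When running the official `registry:2` image, the same toggle can be set with the `REGISTRY_STORAGE_DELETE_ENABLED=true` environment variable.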

After that, the image will be removed from our registry, but if you check, the disk space has barely changed. Why? We have to run the registry garbage collector to remove the image layers from the disk and reclaim some space.

To run the garbage collector, execute the following command, but I suggest running it first with the `-d` flag to understand which changes it will make to our beautiful registry.

> [!TIP]
> Always run the garbage collector with the `-d` (dry-run) flag first to preview which layers will be removed before actually freeing up disk space.

```bash
/bin/registry garbage-collect /path/to/registry/config.yml
```

I hope this guide has been helpful to you, so I invite all of you to post a comment down below if you have any advice or question.

Cheers!
]]></content:encoded>
      <pubDate>Fri, 12 Jun 2020 00:00:00 GMT</pubDate>
      <author>noreply@davideimola.dev (Davide Imola)</author>
      <category>Docker</category>
        <category>Registry</category>
        <category>DevOps</category>
        <category>API</category>
    </item>
  </channel>
</rss>