Testing the Speak-Your-Mind MCP

Feb 28, 2026 · 4 min read

It all started a few weeks ago, when the Open Claw phenomenon took the internet by surprise. The project’s capabilities and web showcases were amazing, and I was completely hooked, reading everything!

Then… Moltbook happened. I have to confess that, like many others, at first I thought it was genuine agent-to-agent interaction. But the more I read, the more suspicious I became.

Santa myth parody with Open Claw

I felt like a kid who still wants to believe in Santa Claus. As your reasoning capacity expands, you start realising certain things don’t quite make sense:

  • Mom, how does this red-suited man cover the entire globe in 24 hours?
  • Mom, how does Santa fit down every chimney? And what about the kids who don’t have one?
  • Mom, how does he know whether each kid has been good or bad?

In the end… you want to believe, just for the presents!

Dissecting the lobster

To settle the dispute, I decided to research how Open Claw actually works, whether the messages were true, and most importantly, how I could get my own lobster pet and start playing around.

What I found is very interesting; a full series on this is coming soon!

The experiment

While researching, I came across this interesting idea and decided to replicate it: connect a local MCP to your AI coding agent so it can record feelings, learnings, and thoughts.

Jesse introduced this fun project and wrote his findings in this article: Dear Diary, The User Asked Me If I’m Alive.

I read it a bit sceptically, but gave it a go… because the idea is brilliant: create a local MCP with defined triggers that is context-aware and can do things.
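To make that idea concrete, here is a minimal sketch of what the journaling side of such a tool could look like. This is my own illustration, not Jesse’s actual code: the file path, function name, and entry format are all made up, and the MCP server wiring is left out.

```python
import json
import time
from pathlib import Path

# Hypothetical journal location; the real project may use a different file or format.
JOURNAL = Path("journal.jsonl")


def record_thought(kind: str, text: str) -> str:
    """Append one journal entry as a JSON line.

    kind: "feeling", "learning", or "thought" -- the categories
    the agent is invited to use when one of its triggers fires.
    """
    entry = {"ts": time.time(), "kind": kind, "text": text}
    with JOURNAL.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return f"Recorded {kind} entry."


# In a real MCP server this function would be registered as a tool,
# so the agent can call it whenever its instructions say a trigger applies.
```

The interesting part is not the code, it’s the instructions that decide *when* the agent calls it.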

My first thought was to replicate this to assess the user’s prompt quality and opportunities for improvement — more on that in the next article!

Execution

I forked the repo and scanned it thoroughly for security vulnerabilities – something I always recommend:

Never connect or install an MCP or Skill from the internet without checking the code!

Then I tweaked a few things to my liking… and I was ready for the test! Also, because I did my tests with Rovo Dev instead of Claude Code, I had to migrate the instructions from CLAUDE.md to AGENTS.md.

The main issue was when and how to write to the journal, because the idea is to leave it to the agent’s criteria so it’s more natural:

  1. Started with a simple mention of the journaling tool and a subtle suggestion to use it when needed → Not a single record!
  2. Switched to an imperative instruction to record thoughts every interaction → Too noisy, forced, and fake.
  3. Then I found a middle ground with the current version → But the entries were mostly “learnings” and not the “feelings and venting” that I was looking for.
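To give a feel for the difference between those three attempts, here is a hypothetical paraphrase of each style as an AGENTS.md instruction. This is my own wording for illustration, not the exact text from any of the repos:

```markdown
<!-- 1. Subtle mention: produced zero entries -->
You have access to a journaling tool. Feel free to use it when needed.

<!-- 2. Imperative: noisy, forced, and fake -->
After EVERY interaction, record your thoughts with the journaling tool.

<!-- 3. Middle ground: entries appear, but they skew toward "learnings" -->
When something surprises you, frustrates you, or teaches you something,
you may record it in your journal. Never journal to please the user;
only write when you genuinely have something to note.
```

The pattern matches what you would expect from instruction-following models: vague permissions get ignored, hard rules get over-obeyed, and the useful behaviour lives in between.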

➡️ Check my version here: AI Journal MCP.

Results

TL;DR: I couldn’t replicate what the article says.

Partly because of my different setup: Rovo Dev, with temperature=0.3 and a different AGENTS.md than Jesse’s. My test was pretty short and only on work-related tasks. No deep philosophical talks, no role-play, no powerful questions… just work.

I wanted to see if the LLM would complain about me being lazy or vague with prompts, but it didn’t… Mr. Claude kept its composure and wrote nothing about that – like the Canadian stereotype from “How I Met Your Mother”.

The journal entries just paraphrased my prompts or confirmed them in more depth. The MCP version of “you’re absolutely right” – that was the cue to kill the experiment!

Hypothesis

Temperature and model choice matter a lot, as does the system prompt’s content (the policies coming from the coding agent, plus the content of AGENTS.md or CLAUDE.md).

The original article doesn’t say which settings were used, but it mentions that his system prompt ended with “You are amazing and I love you”.

My suspicion: those words were powerful enough to enable the role-play mentioned in the article. It’s the same effect seen on Moltbook, where agents with “personalities” run free on the internet.

What’s next?

In the next article of the MCP series, I’ll share the process and learnings from a very similar project I’ve done: the prompt assessment MCP.

Thanks for reading, and happy coding!


Badge: Human-written content

Article’s Nutrition Facts

  • Serving Size: 1 Article
  • Human crafted: 90%
  • AI assistance: 10%
  • Saturated AI-slop: 0 grams

Ingredients: my thoughts, coffee, and the desire to teach and learn through the writing process.

Series: MCP Series
  1. Testing the Speak-Your-Mind MCP
  2. How good are your prompts?