  • Can you have a real conversation with your notes?

    Niklas Luhmann, the prolific sociologist behind the Zettelkasten method, often described his note-taking practice as having a conversation with his system. He would write a note, and the system would “respond” by surfacing related ideas, unexpected connections, and tensions he hadn’t noticed.

    It was a metaphor. A beautiful one, but still a metaphor.

    But what if it didn’t have to be?

    At the PKM Summit in Utrecht last week, I proposed a breakout session around exactly this question: what would it mean to have literal voice conversations with your personal knowledge management system? Not just speaking to transcribe, but speaking to think, and having your notes respond.

    I’ll be honest: I went in with more questions than answers. And I came out with even more. But the discussion surfaced a framework that I find genuinely useful.

    The four levels of voice interaction

    The participants in our session quickly recognized that “voice with PKM” means very different things depending on how sophisticated the interaction is. We mapped it into four levels.

    Level 1: transcription

    This is where most people start. Tools like Wispr Flow, Onit, or Handy let you speak and get text back. It’s a fast, friction-free way to capture thoughts, especially useful when your hands are busy or you think faster than you type.

    Useful, certainly. But this is essentially just a faster keyboard. Your PKM isn’t doing anything with your ideas yet.

    Level 2: AI post-processing

    Here’s where it gets interesting. You speak your thoughts, including instructions, and AI restructures them into something more useful than raw transcription.

    This is what I do myself. I’ll verbally draft an idea in Tana, weaving in instructions like “turn this into bullet points” or “reorganize this around the main argument.” The result isn’t just captured speech; it’s been processed into a form my PKM can actually work with. Non-linear thinking becomes structured notes.

    The key difference from level 1: you’re not just inputting; you’re collaborating.
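
    As a rough illustration of that collaboration, here is a minimal sketch of the level-2 split between spoken content and inline instructions. Nothing here reflects how any real tool works: the patterns, function name, and sample transcript are all invented, and a real system would hand both parts to an LLM rather than use pattern matching.

```python
import re

# Hypothetical sketch: a "level 2" transcript mixes content with inline
# instructions ("turn this into bullet points"). A real system would hand
# both to an LLM; simple pattern matching stands in for that step here.

INSTRUCTION_PATTERNS = [
    r"turn this into [a-z ]+",
    r"reorganize this around [a-z ]+",
    r"summarize this",
]

def split_transcript(transcript: str) -> tuple[str, list[str]]:
    """Separate spoken content from inline processing instructions."""
    instructions = []
    content = transcript
    for pattern in INSTRUCTION_PATTERNS:
        instructions += [m.strip() for m in
                         re.findall(pattern, content, flags=re.IGNORECASE)]
        # Remove the instruction, consuming a leading comma/space too.
        content = re.sub(r",?\s*" + pattern, "", content, flags=re.IGNORECASE)
    return content.strip(), instructions

raw = ("The main argument has three parts, turn this into bullet points, "
       "first capture, then process, then review.")
content, instructions = split_transcript(raw)
print(instructions)  # ['turn this into bullet points']
```

    The point of the sketch is the separation itself: once instructions are distinguished from content, the system can restructure the content instead of just transcribing it.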

    Level 3: interactive voice dialogue

    At this level, the AI doesn’t just process your input; it engages. It asks clarifying questions. It pushes back. It says: “You mentioned three different things here; which one is the core point?”

    This is where voice starts to feel like the conversations Luhmann was describing, and where I’m currently experimenting myself. Using the voice chat feature in Tana, I’ve found that my thoughts become sharper not because I transcribed them, but because something challenged them. It’s slower than level 2, but noticeably deeper.

    Level 4: the Jarvis mode

    This is the aspirational level. Imagine being able to ask your PKM: “What did I decide about this project in January?” or “What sources have I read on habit formation?” and getting a coherent spoken response drawn from your own notes.

    Full integration. Your PKM as a genuine thinking partner you can speak with, not just write to.

    We’re not there yet. But the direction is clear.

    The tension I can’t resolve

    Here’s what I keep coming back to.

    A few years ago, I wrote a post arguing that writing is the best medium for deep thinking. The act of writing forces you to formulate ideas clearly, to make vague thoughts concrete, to notice where your logic breaks down. The friction is the feature.

    Voice removes that friction.

    So is voice input actually good for thinking, or just good for capturing? Is it a shortcut that helps you move faster, or a shortcut that lets you skip the hard part?

    I don’t have a clean answer. My instinct is that levels 1 and 2 are primarily about capture efficiency: getting ideas into the system faster and with less friction. Levels 3 and 4 are where real thinking might happen, but only if the AI engagement is substantive enough to replicate what writing does naturally.

    The question is whether today’s tools get there.

    Where are you?

    Most people working with PKM systems are operating at level 1. A smaller group has reached level 2. Levels 3 and 4 are still being figured out.

    What I took away from the PKM Summit is that the ceiling is much higher than the current tools suggest. And that the interesting work isn’t just building better transcription. It’s figuring out how spoken dialogue can become a genuine mode of thinking, not just a faster mode of typing.

    Luhmann’s metaphor might become literal sooner than we think.

    At which level are you using voice in your PKM? And do you think voice can ever match writing as a medium for deep thought?

    → 12:55 PM, Mar 30
  • Nicholas Bate (@HG21C) on Eufriction:

    Don’t aim to remove all friction from your day. Remove unhelpful friction such as bureaucracy, ineffective tools, and unclear messages. Keep the friction that brings value such as pausing before you send or the effort that leads to quality and better thinking.
    Nicholas Bate https://huntergatherer21c.com/

    Working more slowly promotes deeper thinking and better results.

    → 1:27 PM, Mar 24
  • 🛫EIN – BCN 🛬 I’ve had a blast at the PKM Summit! Still wrapping my head around everything I saw and heard.

    → 6:23 PM, Mar 22
  • 🛫BCN – AMS 🛬 Ready for the PKM Summit!

    → 6:23 PM, Mar 19
  • Two of my productivity tips made it into the March issue of Forbes España: on beating procrastination and taming your inbox.

    A person wearing glasses and a red sweater is sitting atop a clock, surrounded by a text discussing how to overcome procrastination.

    → 1:41 PM, Mar 18
  • So true:

    if you need an alarm to wake up, you are sleep deprived.
    Nicholas Bate https://huntergatherer21c.com/2026/03/11/sleep-what-is-the-cheapest.html

    (This is a note to myself)

    → 4:27 PM, Mar 11
  • Multitasking leads to an overestimation of your own ability to perform multiple tasks simultaneously

    Multitasking is a common phenomenon in our modern society, but research shows that we are much worse at it than we think. The problem lies not only in the fact that we struggle to perform multiple tasks simultaneously, but especially in our inaccurate assessment of our own capabilities. Studies show that people who frequently multitask wrongly believe they are doing so effectively. They significantly overestimate their abilities, while their actual performance falls short. This lack of self-awareness is worrying: we are not only bad at multitasking, but we also seem unable to see or acknowledge it.

    The consequences of frequent multitasking go beyond just a distorted self-perception. Research on chronic media multitaskers, people who regularly process multiple media streams simultaneously, shows striking results. These heavy multitaskers are actually more sensitive to environmental distractions and irrelevant information in their memory. Paradoxically, they even perform worse on tests measuring the ability to switch between tasks, likely because they are less capable of filtering out distracting information. This suggests that frequent multitasking is associated with a fundamentally different way of processing information.

    The comparison to a drunk person who thinks they are walking in a straight line captures the problem well: we think we are better than we actually are. Thus, multitasking creates a dangerous combination of reduced performance and inflated self-confidence. As multitasking plays an increasingly larger role in our daily lives, it becomes all the more important to be aware of these limitations and the gap between our perception and reality.

    Related

    • The human mind and brain lack the architecture to perform multiple tasks simultaneously
    • Attempting to do two or more attention-demanding tasks simultaneously reduces productivity
    • Whether you think you can or think you can’t, you’re right
    • Avoid excessive task switching to increase efficiency
    → 10:26 AM, Feb 25
  • The artisan and the tinkerer

    Last week I read about custom vibe-coded RSS readers: quick, scrappy, unpolished tools built to solve a particular need of their owner. Today I read about Current, which has thoughtful design, interesting concepts, and attention to detail.

    And then I noticed something curious.

    Two divergent paths

    On one hand, there’s this artisan tendency. People who dedicate time to building personal tools with the same mindset a cabinetmaker uses to make a table: every decision matters, every detail counts. The result is beautiful, thoughtful, unique software.

    On the other hand, there’s the explosion of tools made with the help of artificial intelligence. Quick solutions, imperfect, often somewhat rough. But deeply personal. Software that does exactly what you need, even if it won’t win any design awards.

    Two completely different approaches. And both are happening at the same time.

    They’re not opposites

    What’s fascinating isn’t that both exist, but that they coexist without conflict. They’re not opposing approaches; they’re different answers to the same question: what should my tools look like?

    The artisan values the process as much as the result. Technical perfection, elegance, the pleasure of doing something well done. It’s software as personal expression, as craft.

    The person using artificial intelligence values speed and perfect adaptation to their specific case. They prefer something imperfect that works today over something perfect that requires months. It’s software as a practical extension of oneself.

    Neither approach is better. They simply prioritize different things: craft versus immediacy, polish versus personalization, beauty versus direct utility.

    And you?

    Here’s the question: where are you on this spectrum?

    When you think about the tools you use every day, the software that surrounds you, what do you value most? Do you care that something is well made even if it’s not exactly what you need? Or do you prefer it to fit like a glove to your way of working, even if it has rough edges?

    There’s no right answer. But knowing what you value says a lot about how you relate to your work, to your time, to what you consider worthy of your attention.

    What kind of tools are you building, or choosing, for yourself?

    → 6:57 PM, Feb 18
  • AI slop isn’t coming from the AI. It’s coming from the decision to skip the editing, to treat the tool like a magic button. AI amplifies your process. If your process is “generate and go,” that’s what shows up. Are you reviewing what you publish, or are you just hoping it’s good enough?

    → 9:22 AM, Feb 18
  • Audio conversations promote a more equal distribution of speaking time and alignment of speaking patterns, which leads to better group performance than video meetings, despite the absence of visual cues

    When we are physically in the same space, we use various subtle signals to manage speaking turns: eye contact, facial expressions, and body language. This leads to a more even distribution of turns among everyone. In audio conversations, these visual cues are absent, so you focus entirely on auditory signals such as pitch, speaking speed, pauses, and small sounds indicating someone wants to respond. In video meetings, which theoretically offer more data than audio conversations, we pay less attention to audio cues, while visual signals are harder to perceive on a screen. Interestingly, this limitation of audio conversations actually makes you listen more closely to auditory signals and respond more effectively than in video conversations. The result is that everyone gets more equal speaking time, preventing anyone from easily dominating the conversation.

    This more equal distribution of turns positively affects what researchers call ‘prosodic synchrony’: you better align your rhythm, intonation, and speech patterns. When you have more equal turns in an audio conversation, you naturally become more attuned in how you speak. You unconsciously mimic each other in tempo and pitch, resulting in a smoother conversation. This alignment contributes to what is called ‘collective intelligence’: the ability of a group to solve problems together. Teams using only audio are more successful in synchronizing their voice usage and turns, and when they achieve this, they perform better as a group.

    Although video theoretically offers more information, in practice, visual signals are less perceivable than in in-person meetings. This raises questions about the assumption that richer media are always better for remote collaboration. For certain forms of teamwork, audio meetings can be more effective than video conferencing.

    Related

    • A videoconference provokes a narrow focus on the screen, which hinders divergent thinking
    → 9:53 AM, Feb 17
  • It’s Monday. You have been warned!

    @HG21C on Parkinson’s Law, the Law of Triviality and the Law of Delay

    → 9:34 AM, Feb 16
  • If you have some time this weekend read AI companies will fail. We can salvage something from the wreckage by Cory Doctorow. I don’t agree with everything he writes, but it sure got me thinking…

    → 1:14 PM, Feb 14
  • This is very nice!

    I used the free Map to Poster service to generate a beautiful black and white map poster of Lleida.

    A detailed black and white street map of Lleida, Spain, showcasing its roads and geographical layout.

    → 8:39 AM, Feb 6
Why it took us two years to talk about artificial intelligence again

    How many times have you opened ChatGPT this week? And last week? If you’re like most of the professionals we work with, the answer is probably “a lot.” Artificial intelligence is no longer something on the way. It’s here, in your day-to-day work, helping you draft emails, summarize documents, or think out loud about a complex problem.

    And yet, we’ve been silent.

    Well, almost silent. In 2023 we published episode 259, Prepárate para la llegada de la inteligencia artificial. We shared our first impressions, explored possibilities, and talked about what we saw coming. And since then… silence. Nothing more on the topic in almost two years.

    It hasn’t been for lack of interest. Nor because we aren’t using AI daily. In fact, in our coaching sessions we constantly help our clients integrate these tools into their work systems. We do it every week.

    So why the silence?

    The problem of the ephemeral

    Because we were afraid of writing something that would stop being useful in three months.

    Technology is advancing at a dizzying pace. What is a revolutionary feature today is standard tomorrow. What requires a detailed tutorial today won’t even exist in six months because the interface will have changed completely.

    And we didn’t want to add to the noise.

    We didn’t want to create content that would make you invest your time learning something that would soon be obsolete. This worries us especially with paid products, where the feeling of having bought something that no longer works is frustrating. But also with free content, because your time isn’t free. When you spend half an hour reading or listening to something of ours, you’re making an investment. And we want that investment to have the best possible return.

    So we got to work in silence.

    What we’ve discovered

    Over these almost two years, we’ve been looking for the approach the English-speaking world calls evergreen. Perennial content. Ideas and principles that keep their value regardless of whether you use ChatGPT, Claude, Gemini, or whatever tool comes next.

    We’ve observed what works and what doesn’t in sessions with our clients. We’ve seen patterns. We’ve identified the common mistakes, the natural resistances, the moments of insight. And above all, we’ve come to understand what makes AI genuinely useful for your personal effectiveness, beyond the initial wonder.

    We believe we’ve found that approach.

    It’s not about teaching you to use a specific tool. It’s about helping you think about AI in a way that will still be valid in two, five, or ten years. It’s about principles, not tutorials. Strategy, not tactics that expire.

    Very soon you’ll see how all of this is reflected in the content we share. And when you see it, we hope you’ll feel your time is well invested.

    What do you expect from AI in your work? Are you looking for quick tricks, or do you want to understand how to truly integrate it into the way you work?

    → 6:27 PM, Feb 5
  • Currently reading: El infinito en un junco by Irene Vallejo 📚

    → 11:52 AM, Feb 5
  • Finished reading: The 3 Alarms by Eric Partaker 📚

    This book should have been a micro-post…

    → 8:43 AM, Jan 29
  • Remember when ChatGPT first appeared and you thought everything would change overnight?

    Three years later, most knowledge workers still just use it for spell checking and fancier Google searches.

    The tools are ready. The breakthrough isn’t.

    When do you think it actually tips?

    → 5:49 PM, Jan 26
  • You know that moment when you think, “I’ll just send a quick voice message, that’ll be faster”?

    Yeah, that never works out for me.

    I start recording. Stumble over my words. Delete. Try again. Forget what I wanted to say halfway through. Delete again. Third attempt, I finally get it right, but now I’ve rambled for two minutes about something that needed three sentences.

    Meanwhile, typing the same message? Done in 30 seconds. Straight to the point. No awkward pauses, no “uhm” sounds, no deleting and starting over.

    I’ve timed it. Typing is literally four times faster for me than recording a voice message. Plus, when I type, I stay focused. When I record, I somehow turn into a podcast host who forgot their script.

    Don’t get me wrong, voice messages have their place. Sometimes you need that context, that tone, that human touch. But for quick updates? Give me a keyboard any day.

    Do you have the same problem, or am I the only one who needs three takes to say “running 10 minutes late”?

    → 10:49 AM, Jan 21
  • Currently reading: Sell Like Crazy by Sabri Suby 📚

    I’m not expecting much, most marketing books are pretty shallow and mostly written to sell the author’s services. But someone recommended this one to me, so I’ll at least skim through it.

    → 12:29 PM, Jan 15
  • In my mind, webinar is a synonym for sales pitch

    → 10:05 AM, Jan 15
  • My writing style has always been ‘robotic’, but now they accuse me of AI slop.

    → 8:29 PM, Jan 9
How Tana’s voice chat changed the way I work through ideas

    “So what I’m hearing is that you’re trying to balance structure with flexibility. Is that right?”

    I pause. That’s exactly what I’ve been circling around for the last two minutes without quite landing on it. “Yes,” I say. “That’s it.”

    I’m not talking to a person. I’m talking to Tana’s voice chat, and it just did something I didn’t expect: it helped me think.

    Last September, I wrote about how voice interfaces were improving my productivity. The combination of good transcription and AI cleanup meant I could finally get my rambling thoughts out of my head and into usable text. That was a game changer.

    But this is different.

    The difference is the conversation

    With voice transcription, I’m essentially dictating into the void. I talk, the AI cleans it up, and I get a polished version of whatever I managed to articulate. It’s useful, but it’s still one-way.

    Tana’s voice chat is interactive. It listens, yes, but then it responds. It asks questions. It reflects back what it’s hearing. It offers feedback. And that changes everything.

    When you’re just talking into a recorder, you’re limited by your own ability to structure your thoughts in real time. You can only develop ideas as far as your internal dialogue can take you. But when something asks you clarifying questions or points out connections you hadn’t noticed, you think differently. You go deeper, faster.

    I’ve been surprised by how much this helps. I’ll start with a vague idea, and through the back-and-forth, it becomes something clearer and more developed than I would have reached on my own. It’s like having a thinking partner who’s infinitely patient and never gets tired of asking “Can you say more about that?”

    What actually makes it useful

    Three things stand out.

    First, the questions it asks are genuinely thoughtful. They’re not generic prompts. They’re based on what I’ve just said, and they push me to clarify or expand in ways that feel natural.

    You can also fine-tune how the voice chat behaves by adding custom prompts and instructions for each supertag in Tana. This means the assistant can adapt to different types of thinking sessions, whether you’re brainstorming, planning a project, or working through a problem.

    Second, the summary at the end. After the conversation, I get a concise overview of what we discussed. This is surprisingly valuable. It’s one thing to have a conversation; it’s another to see the key points distilled afterward. It helps me understand what I actually figured out.

    Third, the visual outputs. I can quickly generate flowcharts or sketches based on our discussion. This isn’t just a nice-to-have. Seeing ideas in visual form often reveals gaps or connections that weren’t obvious when they were just words.

    What this means for how I work

    I’ve used plenty of AI tools. Most of them are about output: write this email, summarize this document, generate this image. They’re useful, but they’re transactional.

    This feels different. It’s not about getting something done; it’s about thinking better. The tool isn’t replacing my thinking; it’s scaffolding it. It’s giving me a structure to develop ideas I couldn’t have developed as well on my own.

    I’m sure I’ll be using this more often. Not for everything, but for the moments when I need to work through something complex or when I’m stuck and need a different angle.

    If you’ve ever wished you had someone to talk through an idea with, but didn’t want to bother an actual person at 11 PM on a Tuesday, this might be worth trying.

    What’s your experience with AI thinking partners? Have you found something that helps you think, not just produce?

    A mobile screen displays an app interface with a summary of professional tasks and a small visual representation of a business plan below.

    → 6:27 PM, Jan 9
  • Finished reading: Click by Jake Knapp 📚

    How do you know if what you’re building is actually what people want? Most teams skip this question and jump straight into execution.

    “Click” by Jake Knapp and John Zeratsky (the prequel to Sprint) offers a practical answer: the Foundation Sprint, a two-day process to develop your product hypothesis before you build anything. It focuses on understanding customer needs, leveraging your team’s strengths, and differentiating from competitors.

    I’m planning to use some of their tools when we define our next product. Not following the method 100%, but cherry-picking what makes sense for us.

    Have you read it? What’s your approach to validating ideas before building?

    → 8:13 PM, Jan 8
  • Mobile notifications cause significant distraction and performance reduction due to prolonged mind-wandering

    Mobile phones are a major source of distraction in our daily lives, even when we are not actively using them. Research shows that just a notification, a beep, or a vibration significantly disrupts our performance on tasks that require concentration. Remarkably, this disruption is as great as when someone is actually on a call or texting. You don’t even need to look at your phone to get distracted; that little sound or vibration is enough to lose your focus.

    The reason these short notifications have such a big impact lies in what happens in our minds afterward. A notification triggers a series of thoughts unrelated to what you are doing. You start to wonder who sent a message, what’s in it, if it’s important… in short, your thoughts wander. This phenomenon, also known as ‘mind wandering’, lasts much longer than the notification itself. Even after the sound is gone, those thoughts linger in your head, and your performance continues to decline.

    This explains why distraction by your phone is so persistent: it’s not just about the time spent checking your phone, but mainly about the mental interruption that lasts much longer. Our limited attention capacity must be divided over multiple things simultaneously, and when a part of it is consumed by thoughts about that notification, less is left for the task that requires our full attention.

    Related

    • Attempting to do two or more attention-demanding tasks simultaneously reduces productivity
    • Avoid excessive task switching to increase efficiency
    • We pick up our phones over 350 times per day, almost four times more than in 2019
    → 8:47 PM, Jan 7
  • Finished reading: Piensa claro by Kiko Llaneras 📚

    I thought interpreting data wasn’t that hard. I know the basic rule: correlation doesn’t imply causation. But Kiko Llaneras’s book has made it clear that my biases go way beyond what I thought.

    The example that blew my mind: a graph showed a negative relationship between a football player’s value and their defensive actions. Until it was split by country. Suddenly, each country showed a positive relationship. The same data, two completely opposite stories depending on how you look at it. If this can happen with something as measurable as football, how many times do we misinterpret data in our daily lives without even realizing it?

    → 5:05 PM, Jan 7