How Voicenotes supercharged my note-taking workflow.
Until recently, I was using the Supernote to quickly get ideas out of my head, but that has changed. I’m now double-tapping the back of my Pixel and recording my ideas in Voicenotes.
I recently published a video explaining how the app works, and you can watch it below. Basically, you record something and the AI transcribes it. The coolest part, though, is asking Voicenotes anything and getting answers based on your notes.
Getting contextual answers was already great, but a recently released feature brought things to another level. Now, when you open a note by tapping or clicking its title, you’ll see a list of related notes.
However, that feature was not available when I started using Voicenotes and was trying to figure out what the app could do. One of my tests was to record every idea that came to mind while preparing for a conversation with Jijo Sunny, Voicenotes co-founder.
My original plan was to ask the app for a summary of my ideas for the conversation. But days before recording the interview, when I sat down to work on the script, I decided to use the new ‘Related Notes’ feature instead. It blew my mind.
Because I always started my notes with something like “more ideas for my conversation with Jijo,” the app easily found all the related notes. Then it was just a matter of pasting them into Obsidian and refining everything. This process saved me so much time compared to how I was writing scripts before.
The best part was that I didn’t have to go to a specific note or folder in Obsidian to write down the ideas, nor did I need to “translate” them into written sentences. It was much faster because I could just talk about any of my video ideas. The order of the notes, or where they lived, didn’t matter at all. In the end, the AI did a fantastic job putting them together.
The first attempt was so successful that I decided to try the process with other scripts, and it’s working flawlessly. All I have to do is remember to include the likely title or subject of the video, to help the AI group the notes later.
I’m still using the Supernote for many other things, but this quick capture and future refinement using Obsidian has been working too well to ignore. You can see it for yourself. One of the final results of this new workflow is shown in the video below. Almost everything I asked Jijo was captured using Voicenotes.
But keep in mind that this workflow is just one possible scenario; I’ve been seeing many other use cases. If you’re already using Voicenotes, tell us how, and if not, maybe give it a try (there’s a free plan). It might surprise you.
As for the double-tap, I’m not sure whether it’s available on all Android phones, but on my Pixel it’s under the “Gestures” settings, and it’s called “Quick tap to start action.”
🤯 My last week according to Voicenotes #AI. Yes, it got everything right. Based on my notes, of course.
A modern note-taking app that gets to the point | #Voicenotes
What if you could take all your notes using only your voice? What if AI could transcribe your voice notes, find any information, and even help you create lists, to-dos, blog posts, and more? For someone like me, Voicenotes’ reliance on AI is concerning. But maybe that’s precisely what you’ve been looking for.
What if organizing has become a waste of time?
It has been a few years since I started questioning how useful organizing actually is. Every time I see something like the AI automations for Gmail shown at I/O 2024, I become more convinced that organizing is turning into a waste of time.
Then there’s the recently released Voicenotes, a note-taking app that, as far as I can tell, relies on AI for transcribing, summarizing, tweeting, creating lists, telling us anything about our past, and so much more. For someone like me, that’s a bit… unsettling, to say the least. But the younger generation may be entirely comfortable with this idea. Anyway, a full review of the app is coming soon.
Back to my point. It might be time to stop worrying about organizing information.
The Large Action Model (LAM) running on Rabbit OS is a fascinating use of AI. But, if I’m being honest, what really attracted me was the retro vibe of the Rabbit R1 device itself. It’s so cool.
Technology is neither good nor bad...
It’s all about how we use it
The year is 2012, and I brought a real camera to the Evernote Conference to make sure I would end up with good pictures. It was a wise move, but there was a problem with this one picture.
The camera’s small display made me believe everything was okay, but the picture I asked a passerby to take of Phil Libin and me turned out blurry. You can imagine my disappointment when I transferred the pictures to my computer later that day. It was the only one I had with him, so I kept it.
Fast-forward to the era of AI…
The other day, I was searching for a picture to test the Google Photos unblur feature, and I thought of the one with Phil Libin. Google Photos fixed it in seconds with a single click. I’m impressed; that’s so cool.
Before and after pictures
Original picture from 2012
Picture fixed by AI in 2023