26 Jul 2025
------------

Thoughts: Asynchronous Intelligence

I believe everyone has noticed that there are now three types of people in the world: AI opposers, AI supporters, and AI neutrals. I am more on the AI-neutral side, as are my (not many) friends; the online communities I usually visit are mostly AI opposers, while at work a fair number of my colleagues, and of my friends' colleagues, are AI supporters.

In chatrooms, what I hear most from AI opposers is "AI's outputs are rubbish" or "someone's success is a false one, because they relied on AI and don't actually know what they did". The AI supporters' side is much, much simpler: "why learn programming? You can't write better programs than AI anyway." Both are correct, I assume, and the ugly fact behind the facade is, I am afraid, an ever-clearer gap between the ordinary and the elite.

I say this because AI, or an LLM, can easily make a person without the relevant skills believe they can perform a task without what it requires. Setting aside technical details like how AI generates answers, or hallucinations, most testimonies about using an LLM to write programs boil down to "you don't even need to know what syntax means to write your own app". Let me coin a term: an AI agent or LLM can help a person reach a "1 AI" competence level in a relatively short amount of time.

Think about what AI advocates keep saying, such as "80-90% of human employees will be replaced by AI". The claim rests on the assumption that AI can perform as well as, if not better than, an average employee. So what they are actually saying is: you don't need to hire an expert who is <= 1 AI competent if a random fresh graduate can easily be 1 AI competent.

I once used a coding assistant called Codeium, for just a few days. My first impression was not bad, because I saved a LOT of typing. The generated code ranged from 0% to 99% correct.
I never used what the coding assistant generated as-is; I always reviewed it and changed whatever I thought was not correct or good enough. Still, I saved a LOT of time.

Later I tried "vibe coding", to see what it feels like. The program was a WebP to PNG converter. What I did was ask the LLM to generate the complete code, compile it, copy the error messages back to the LLM, and ask it to fix them. I spent one hour on it, it didn't work out, I got fed up, and then I fixed the code myself in 15 minutes. I strongly prefer to be > 1 AI competent, and from the vibe coding experiment I assume I am. I am just so surprised that there are more and more vibe coders willing to do this job this inefficiently.

Don't get me wrong: if I am tasked with something I have no knowledge of, such as writing an SOP document for a lab test, I will want to rely on AI and let it make me look like I am doing a proper job. With my 0.1 AI competence level I won't be able to tell whether the AI has generated some bullshit or a masterpiece, but it will be of timely help.

There was recently a meme on Lemmy, a short comic. A person named Poory is holding $60 and asking an artist to draw something. The proud-looking artist replies "No Poory, it will cost $260", so Poory says "Okay" and turns to an AI, leaving the desperate artist yelling "NOOOOOOOOOOO!!!". Some will say that human artists are greedy and AI is giving them their well-deserved punishment. Others will say that people easily ignore the rightful earnings professionals deserve for their effort. Again, setting the moral questions aside: I am thinking that if one can't tell the difference between a $60 work and a $260 work, they deserve the $60 work, don't they?! That's the thing I want to say: more and more people nowadays can't tell the difference between an expert's result and an AI's output, so a 1 AI competence level is "good enough" for them!
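Back to the vibe-coding experiment: the trial-and-error loop I ran by hand (generate, run, paste the error back, repeat) can be sketched in a few lines of Python. The `fake_llm` function below is a hypothetical stub standing in for a real model API call; its canned responses just let the loop run end to end.

```python
import subprocess
import sys

# Hypothetical stand-in for a real LLM call: a stub whose first attempt is
# broken and whose second attempt is "fixed", so the loop below terminates.
FAKE_RESPONSES = [
    "print('hello world'",    # first attempt: syntax error
    "print('hello world')",   # second attempt, after seeing the error
]

def fake_llm(prompt, _state={"n": 0}):
    """Return the next canned response, ignoring the prompt."""
    reply = FAKE_RESPONSES[min(_state["n"], len(FAKE_RESPONSES) - 1)]
    _state["n"] += 1
    return reply

def vibe_code(task, max_rounds=5):
    """Ask for code, run it, feed errors back until it works or we give up."""
    prompt = task
    for _ in range(max_rounds):
        code = fake_llm(prompt)
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code, result.stdout
        # Paste the error back, exactly as I did by hand in the experiment.
        prompt = f"{task}\nYour code failed with:\n{result.stderr}\nPlease fix."
    return None, None

code, output = vibe_code("write a program that prints hello world")
print(output)
```

With a cooperative stub this converges in two rounds; my real session, of course, never converged at all, which is rather the point.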
I feel absolutely terrified when I see or hear that someone has fully delegated a task they are supposed to be the expert in, and very good at, to an AI agent or tool. I feel deeply disturbed when I try to understand how they can feel okay being just 1 AI competent. And I am extremely worried when I imagine not being able to find even one thing about myself that is > 1 AI competent.

It looks to me like, in the future, the "ordinaries" (or some better term), who are satisfied with results of 1 AI quality, will very happily use AI agents to handle all their daily tasks, like in the film "WALL-E". The "elites", who perform at and demand quality of at least 3-4 AI, will give up on the "ordinaries", aim for higher goals, and simply stay within their own circles. It is what it is.

I am not going to fully accept or fully deny AI. I bought a Rabbit R1 at a very affordable price, and I am going to experiment with how it changes me. I might simply carry it with me every day without using it, or I might become a super AI lover who can't do anything without AI. I also plan to give it to my kid later, if Rabbit happens to still be in operation.

That said, I don't want my kid to be one of the "ordinaries". I think using AI is a modern skill that youngsters should know, but not fully rely on. What I want him to do is very simple, and I believe many of us practise it every day: use AI to become 1 AI competent quickly, then learn to become 1.1 AI, 1.2 AI, and so on, to infinity and beyond. Don't let what he does be an "asynchronous intelligence"; let it be what he truly knows and practises.

p.s. My real contingency plan for the Rabbit R1 is to flash Ubuntu Touch onto it!