Spellcasters - Feb 29th, 2024
Unlike magic, AI is real, and that scares people. What happens when we automate everything with machines? How much control are we willing to give up to the machines? What happens if we don’t know how the machines work anymore? Big week for AI and crypto.
Jensen Huang, CEO of Nvidia, argues that we should stop saying kids should learn to code: “Huang argued that, even at this early stage of the artificial intelligence (AI) revolution, programming is no longer a vital skill. With coding taken care of by AI, humans can instead focus on more valuable expertise like biology, education, manufacturing, or farming, reasoned the Nvidia head.” I disagree with him, because learning to code teaches you how to solve problems, and that is a valuable skill. Whether it remains a viable and financially lucrative career is hard to tell, and that might be his point.
Gemini refuses to show images of white people: This is interesting for two reasons. First, there is clearly an issue with their internal processes: how they test, how they incorporate feedback, and how removed the exec team is from the product. Second, it also diverges from their core mission of “organizing the world’s information to make it universally accessible and useful” by becoming an activist company. Mario Juric says it best: “Gemini is […] a reflection of the values people who built it […] G's Search -- for all its issues -- has been perceived as a good tool, because it focused on providing accurate and useful information. Its mission was aligned with the users' goals ("get me to the correct answer for the stuff I need, and fast!").” Google is for various reasons (talent, data, and infrastructure) extremely well positioned to lead the AI movement, but they continue to fuck up. To be fair, though, they have shown in the last year that they can ship capable AI products faster.
The principle of 'Just because we can, doesn’t mean we should' led to an interesting conversation I had with Josh Nussbaum about the appropriate use of AI. Josh argued that AI does not necessarily make individuals better coders, and that those relying on AI cannot surpass the skills of proficient coders. First of all, we don’t need good coders for everything, and second, whether he is right or not, more people than ever are building things with the help of AI, myself included. I have learned to code so much faster because I could simply talk to an AI 24/7. The broader implication is that everyone gains from coding with AI: enabling coding through natural-language interaction opens up these systems for a much wider audience to engage with and build on.
Arc Browser’s “pinch-to-summarize” feature on the phone is actually great. It’s not perfect, but it’s good UX that makes a lot of sense on the phone. I have experimented with summaries in the past, and while in theory they make sense, they rarely capture the essence of an article/podcast/video very well and kind of commoditize information. As with everything in AI, I think the right question to ask is: can we use AI to assist the user? Could it not provide additional information on an article? Could it not highlight specific parts of an article? Could it not offer an “opinion” on the article? At least Casey Newton is not a fan of it :)
Sam Lessin: “It is such great news that bitcoin just crossed 60K and basically no one cares.”