Sheel Mohnot / @pitdesi: @dwarkesh_sp @ilyasut I thought we had a few more years! [image]
Dwarkesh Patel / @dwarkesh_sp: The @ilyasut episode 0:00:00 - Explaining model jaggedness 0:09:39 - Emotions and value functions 0:18:49 - What are we scaling? 0:25:13 - Why humans generalize better than models 0:35:45 - Straight-shotting superintelligence 0:46:47 - SSI's model will learn from deployment [video]
Prinz / @deredleritt3r: Ilya Sutskever on the mountain he's climbing with SSI: 1. Dwarkesh and Ilya discuss how easy it is for a teenager to learn how to drive - takes 10 hours. Ilya: “People exhibit great reliability, robustness, and ability to learn in a domain that did not exist until recently”
Rohan Anil / @_arohan_: The phrase “we are in the age of research” goes hard.
Xander Dunn / @xanderai: I find your lack of faith in deep learning disturbing. [image]
Dwarkesh Patel / @dwarkesh_sp: Ilya on research taste: “One thing that guides me personally is an aesthetic of how AI should be by thinking about how people are. There's no room for ugliness. It's just beauty, simplicity, elegance, with correct inspiration from the brain. The more they are present, the more [video]
Khurram Javed / @kjaved_: I am pleasantly surprised by Ilya. He has identified some key aspects of intelligence that are largely absent from the popular AI discourse. These are: 1. Intelligence is about the ability to learn and not about knowing many things. The right goal is a system that can learn
Jackson Dahl / @jacksondahl: “How things should be.” “Looking for... beauty and simplicity. Ugliness, there's no room for ugliness. It's beauty, simplicity, elegance, correct inspiration from the brain. All those things need to be present at the same time.” [image]
Shweta / @shweta_ai: Ilya transcended vague poasting and unlocked the final form, vague podcasting
Gary Marcus / @garymarcus: Truly uncanny how week after week eminent guests on the @dwarkesh_sp show from Sutton to Karpathy are converging on what I have been saying for years. The latest is @ilyasut, who has converged with me on the deficiencies of neural networks relative to humans in generalization
Nabeel S. Qureshi / @nabeelqu: Fascinating from the new Ilya interview: he says the age of scaling (2020-2025) is over, and we're back to looking for more breakthroughs (“the age of research”). [image]
@zephyr_z9: Ilya popped the AI bubble. It's over
Max Zeff / @zeffmax: 😬😬 Ilya on rejecting Mark Zuckerberg's acquisition offer, but losing his co-founder Daniel Gross to Meta Superintelligence Labs in the process: “As a result, [Gross] was able to enjoy a lot of near-term liquidity, and he was the only person from SSI to join Meta.” [image]
@scaling01: Ilya Sutskever implies that we are in a bubble: “Scaling sucked out all of the air in the room” “We are in a world where there are more companies than ideas by quite a bit” [video]
Dwarkesh Patel / @dwarkesh_sp: “The thing that happened with AGI and pretraining is that in some sense they overshot the target. You will realize that a human being is not an AGI. Because a human being lacks a huge amount of knowledge. Instead, we rely on continual learning. If I produce a superintelligent 15-year-old, they don't know very much at all. A great student, very eager. [You can say,] ‘You go and be a programmer. You go and be a doctor. Go and learn.’...
Yann LeCun / @ylecun: 🤣 [Quote posting a Flirting vs. Harassment meme: Ilya Sutskever: “Scaling is over and LLMs are a dead end” “aww, you're sweet”; Yann LeCun: “Scaling is over and LLMs are a dead end” “Hello, Human Resources?”]
@thealokverse: Some key takeaways from Ilya Sutskever's latest podcast: 1. We are moving from the age of scaling to the age of research. Bigger models are not enough anymore. 2. Future progress will come from smarter training methods, not just more compute. 3. Today's models pass exams and
Sebastian Raschka / @rasbt: Ok, so what Ilya saw was extreme benchmaxxing, which in turn prompted him to create his own company to do LLM development the proper way?! Makes sense, I sympathize with that.
Karina Nguyen / @karinanguyen_: “Maybe what it suggests is that the value function of humans is modulated by emotions in some important way that's hardcoded by evolution. And maybe that's important for people to be effective in the world. ...there is this complexity-robustness tradeoff, where complex things
Dwarkesh Patel / @dwarkesh_sp: “One of the very confusing things about the models right now: how to reconcile the fact that they are doing so well on evals. And you look at the evals and you go, ‘Those are pretty hard evals.’ But the economic impact seems to be dramatically behind. There is [a possible] [video]
Minqi Jiang / @minqijiang: Seems like the Ilya interview revealed a lot about SSI's technical direction. I really like the approach he hinted at. If I am interpreting it correctly, it's an area that has been actively explored in academia but not by the large labs at significant scale.
Hamel Husain / @hamelhusain: The Gemini 3 commercial in this podcast is next level. It's elite-level product placement that I haven't seen before. Respect 🫡
Robert Scoble / @scobleizer: I agree. Incredible interview by @dwarkesh_sp of @ilyasut. I could listen to both for months and not get bored. It's like being at a great university and hearing the best professor. I love X. This just LIT UP the AI community. If you aren't watching my AI lists in X Pro you are
@seconds_0: Incredible interview. Dwarkesh deserves all the accolades he is getting as a sophisticated and intelligent interviewer willing to push back and ask good questions. Ilya was a great guest. Your time would be well spent listening here.
Yonathan Arbel / @profarbel: Interesting to see a person radiate wisdom without using any abstruse jargon or grammatical flourishes, just by being a zealot for simplicity
@basedjensen: What Ilya is trying to say in so many words is you gotta have faith anon. Without faith nothing happens
@blerkmerk: Ilya Sutskever is the Oppenheimer of our generation
Dwarkesh Patel / @dwarkesh_sp: “From 2012 to 2020, it was the age of research. From 2020 to 2025, it was the age of scaling. Is the belief that if you just 100x the scale, everything would be transformed? I don't think that's true. It's back to the age of research again, just with big computers.” @ilyasut [video]
Alex Volkov / @altryne: Ilya is like me fr. How is there such a disparity between hard benchmarks and agents (even the best ones) doing the same shit all over again? [video]
@scaling01: Ilya Sutskever, who coined the term “feel the AGI” at OpenAI, is no longer feeling the AGI [image]
Marcos Gorgojo / @marcosgorgojo: It is always great to hear Ilya's views. I've just created a quick map if you want to follow along. [image]
Justin Lokos / @justinlokos: “Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling... But now the scale is so big... So it's back to the age of research again, just with big computers.”
Ariel Ekgren / @aryomo: Long time since long-form Ilya, much exciting!
Mr. Paradox / @mrpaaradox: Ilya speaking about performance on evals could be a big reason for the reality of current AI models.
Dileep George / @dileeplearning: No kidding! There is indeed something called data-efficiency. Data-augmentation is a crutch that serves to hide, but not solve, the real fundamental inefficiencies. [image]
@scaling01: Ilya acknowledges the Google Gemini team's success in scaling up pre-training, but still maintains that pre-training is dead and that we have entered a new era focused on research [video]
Rihard Jarc / @rihardjarc: Ilya Sutskever just said that when it comes to AI models, we are back at the age of research & ending the age of scaling. What he is telling us is that more compute at this point won't help us get much better models; we need new breakthroughs. Not something that the semi [video]
@scaling01: Ilya Sutskever: We are no longer in the age of scaling, we are back to the age of research [image]
