# Goldman Sachs Report, AI Coding Tools, Music AI Lawsuit

**Source:** [https://www.youtube.com/watch?v=Wlf6id2FH-Q](https://www.youtube.com/watch?v=Wlf6id2FH-Q)
**Duration:** 00:31:14

## Summary

- Goldman Sachs released a stark report questioning the near‑term value of generative AI, contrasting its earlier optimistic claim of a 7% GDP boost with a now‑skeptical outlook that has sparked debate among the panelists.
- Developer Pietro Schirano launched “Claude Engineer 2.0,” adding a code editor and execution agents to a command‑line tool, highlighting the next evolution of AI‑assisted coding and prompting discussion about who leads the Anthropic vs. OpenAI race.
- Panelists praised Claude Engineer’s goal‑oriented, agentic design as a glimpse of the industry’s future direction toward autonomous, task‑driven AI agents.
- The RIAA sued generative‑AI music services Suno and Udio for alleged mass copyright infringement, raising questions about how copyright law will shape AI training data and the broader creative‑AI ecosystem.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=0s) **AI Futures: Finance, Coding, Copyright** - The episode previews three hot AI stories—a Goldman Sachs report questioning generative AI’s value, Pietro Schirano’s Claude Engineer 2.0 coding assistant, and the RIAA’s lawsuit against AI music startups over copyright infringement.
- [00:03:17](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=197s) **Debating AI Risk Estimates** - The speakers reflect on the recent generative‑AI hype, reference Acemoglu’s claim that only 5% of tasks are truly at risk, and argue whether that estimate under‑ or over‑states the technology’s broader economic impact.
- [00:06:21](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=381s) **Rapid Evolution of AI Landscape** - The speaker reflects on the swift turnover of AI models—from Llama 2 and Claude to Falcon and agents—criticizes the notion of an AI bubble or lack of a “killer” app, and emphasizes that AI already permeates countless applications.
- [00:09:30](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=570s) **AI Energy Demand vs Supply Constraints** - The speakers compare the large power consumption of AI queries to traditional searches, debate whether energy availability will limit future AI deployment, and argue that algorithmic and hardware efficiencies will eventually alleviate these concerns while the real challenge becomes extracting value from the technology.
- [00:12:37](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=757s) **Claude Engineer Opens New Coding Horizons** - A discussion on Pietro Schirano’s open‑source Claude Engineer tool, which lets developers access Claude 3.5 Sonnet from the command line and adds agent‑driven features that could expand AI coding assistance beyond simple autocompletion toward on‑demand, Stack‑Exchange‑style help.
- [00:15:42](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=942s) **Assistive vs Agentic Coding Tools** - The speakers contrast Copilot’s line‑by‑line code suggestions with Claude Engineer’s agentic ability to scaffold entire applications and workflows.
- [00:18:47](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=1127s) **Ground Innovation Drives AI Competition** - The speakers argue that real breakthroughs arise from on‑the‑ground engineers, noting a perceived shift toward Anthropic gaining an edge over OpenAI as developers build third‑party products on top of foundation models.
- [00:21:49](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=1309s) **AI-Generated Music Copyright Lawsuit** - The speaker discusses the RIAA’s lawsuit over AI models training on copyrighted music, highlighting its significance as the first major case in the music arena and comparing it to evolving norms around ebook piracy.
- [00:24:54](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=1494s) **Focusing on Musical Output Infringement** - The speakers explain how plaintiffs will target specific song elements such as chords and progressions, citing prior cases (e.g., Ed Sheeran, The Verve) to argue infringement, and argue that a fair‑use defense is unlikely to succeed.
- [00:28:01](https://www.youtube.com/watch?v=Wlf6id2FH-Q&t=1681s) **Synthetic Music and Copyright Workarounds** - The speakers debate using AI‑generated music and similarity metrics to create legally distinct works, arguing that embedding spaces and fair‑use reasoning could sidestep infringement claims despite industry turmoil.

## Full Transcript
Tim Hwang: Hello and happy Friday.
You're listening to Mixture of Experts.
I'm your host, Tim Hwang.
Each week, Mixture of Experts brings together a wide range of specialists to
separate the AI signal from the AI noise.
We tackle the biggest stories of the week and distill them down
to just what you need to know.
This week on the show, three top headlines.
First, the banks weigh in.
Goldman Sachs is out with a harsh report on the future of generative
AI, claiming the space still has a long way to go to prove its value.
Are the bankers out of touch?
Or do we think they've got some good points?
Brent Smolinski: Feels like they went from one extreme to
the other a little bit, right?
Like, I almost feel like there's something in the middle.
Tim Hwang: Second, AI developer Pietro Schirano is out with Claude Engineer 2.0, which adds a code editor and code execution agents to an already powerful command line interface tool.
What does the next stage of coding assistance look like?
And who's currently winning in the Anthropic vs. OpenAI matchup?
Chris Hay: The interesting thing about Claude Engineer, it's really
embraced the agent methodology.
It's agentic, it is goal oriented.
And I think that's where we really are going to be going as an industry.
Tim Hwang: Third, the Recording Industry Association of America, or RIAA, has launched a
lawsuit against generative AI music companies Suno and Udio, claiming
mass copyright infringement.
How might copyright shape the generative AI space, and what does it
mean for the future of training data?
Marina Danilevsky: They're not going to get their way, and at
some point in time, they're going to have to learn to live with it.
Tim Hwang: As always, I'm joined by an incredible group of panelists
that will navigate what has been another action packed week in AI.
Today, we've got Chris Hay, Distinguished Engineer, CTO, Customer Transformation,
Marina Danilevsky, Senior Research Scientist, and joining us for the very first time, Brent Smolinski, Global Head of Tech, Data and AI Strategy.
Brent Smolinski: Thanks for having me.
Tim Hwang: So first up, I want to change a little bit of what we do typically,
and I want to ask you all a yes or no question, and then we're going to
actually dive into the story of the week, which is the Goldman Sachs report.
And the question is this, is AI a bubble?
Chris?
Chris Hay: No, definitely no.
Tim Hwang: Marina, what do you think?
Marina Danilevsky: Yeah, in generative AI, a little bit.
Tim Hwang: And Brent, what do you think?
Brent Smolinski: Yes, sort of.
Tim Hwang: Well, with those extremely definitive answers, let's move on to our story for the week.
Brent Smolinski: Listen, I think if you look back, you know, a little over a year
ago, when Goldman Sachs first published their, uh, article on kind of the impact
of that, that generative AI would have on the market, they predicted something
like 7 percent GDP lift, uh, which is, just as context, that's the size of
the North America healthcare market.
That's a massive impact.
I think now what we're starting to see, uh, in their last publication, is that they're beginning to backpedal on that.
Tim Hwang: Yeah, and I think it's a great intro.
I think that's exactly what I wanted to talk about.
I mean, just last week, or I think just a few weeks ago now, Goldman
Sachs released its update, Brent, that you're kind of referring to,
and they kind of fessed up, right?
I think the end conclusion of that report is that the current state of generative AI is, quote, too much spend for too little benefit.
Um, and this follows on the heels of some other cautious statements
coming out of Sequoia, obviously a prominent VC fund, and McKinsey, which
works with a huge number of companies and has been kind of like at the
forefront of, I think, pushing sort of generative AI as an enterprise use case.
And that's kind of where I want to start today is just to kind of like go into this
kind of moment of, you know, hesitation.
I mean, I look at the last 24 months, and it's been crazy growth in generative AI and crazy excitement.
Um, but I think now the industry is kind of like almost thinking a little bit
about like, okay, so what happens next?
And I guess Marina, maybe I'll throw it to you next.
I mean, one of the numbers that was most striking to me that was
kind of cited in the Goldman Sachs report was that, you know, they, they
talked to Daron Acemoglu, who's this kind of prominent MIT economist who estimates only about 5 percent of tasks are really genuinely at risk from what's been happening in generative AI.
I guess, Marina, do you buy that as an estimate as someone who kind of works
on the technical side of all this?
Like, do you see capabilities, you know, really becoming much broader over time
or, or really do you think this estimate is kind of inaccurate of like, you
know, the kinds of tasks that are really going to be at risk in the economy?
Marina Danilevsky: I think the state at which the tech is right now, I'm actually
not that far off from what, uh, what he says as well, what Daron says, um,
there's still a lot to be thought of as far as things that we could do with
this technology, but it's very clear that we haven't quite thought of it yet.
So when it comes to, you know, is it a bubble right now?
Yeah, a little bit as far as the hype versus what the capabilities actually
are, what the reliability actually is.
I think we need to continue to think.
Something that's always really interesting about core technological research is
you don't know what the applications are sometimes until sometime later.
So it's always very interesting to push those boundaries, but yeah, there's
gotta be a difference between hype and actual usability, especially when
it comes to things that are reliable.
At the moment, it's good as an accelerant.
It's good at speeding up people and the tasks that they're kind of doing right now.
All right, but that's not enough.
Tim Hwang: Does it, like, justify the valuation of NVIDIA as, you know, the most valuable company, um, in the world?
I mean, I guess, Chris, you know, I think I recall the last few times you've been on the show, you've always been our hard-bitten cynic.
Um, I don't know if you, uh, agree with Brent and Marina here, or
if you're more of a contrarian, like you actually feel like you're more optimistic
than what these bankers are saying, because what do bankers know anyways?
Chris Hay: I love the hype.
We wouldn't have this podcast if there wasn't any hype.
So no, I enjoy this; every few weeks, the hype has to stay.
I read the report though, and I think, I can't remember who said
it, but one of the guys said, uh, nothing's going to happen for the next 10 years.
You know, this generative AI is a complete waste of time.
And, and I was just thinking about like back to the 60s when someone said, you
know, there's only a need for maybe five mainframes in the entire world.
And I'm just like, Oh my goodness, I wouldn't want to
be writing that on that report.
I, I wouldn't want to be quoted in 10 years time being the guy that said
generative AI was a waste of time.
So, uh, now I think it is early.
Obviously, um, but the technology is progressing so fast.
I mean, I, I was talking to a customer yesterday and I brought up the models that
were popular this time last year, right?
And if you think about this time last year, right, Llama 2 had just come out.
There was no such model from Mistral.
Right.
The, the Granite models were just out at this point.
Claude 2 had just come out, never mind Claude 3.5 Sonnet, right?
There were no Turbo models from OpenAI.
And we were talking about the Falcon models.
We were talking about Vicuna.
Nobody talks about these anymore.
The, the, everything has moved so fast.
And then this year we're like agents, agents, agents.
So.
If I look at the kind of time frame they're talking about, next three
to five years, ten years, this industry is moving so fast and the
capabilities are getting so much better.
I, I, I'm happy for them to say it's a bubble because that's going
to create more space for people to get on and do the work, so.
Like more opportunity.
Yeah, exactly, but this is not going away.
That's for sure.
Yeah,
Brent Smolinski: I mean it feels like they went from one extreme
to the other a little bit, right?
Like, I almost feel like there's something in the middle.
I think one of the analysts said that there's no killer application of AI.
I mean, that to me seems like an odd statement, right?
Because, I mean, first of all, AI permeates.
ChatGPT.
Yeah, I mean, well, I think that's what he meant, right?
Like, I think what they're getting at is, is, is really, when they say AI,
they meant large language models, right?
But the reality is, is AI permeates, like, so much of our, uh, applications today.
Uh, I, I mean, it's just, uh, I mean, even in this, uh, this session that
we have right now, AI is being used to do signal processing, clean up
the videos, and so on and so forth.
So, I mean, AI permeates just about everything we, we interact, all
applications we interact with today.
So, so I don't quite get that statement.
I think he, what he meant was, uh, large language models.
Marina Danilevsky: I know, I agree with you a lot, actually.
AI has been around for a very long time and does a lot of
interesting and good things.
AI just means, hey, the computer's doing something useful.
There's some kind of, you know, statistics processing happening.
And generative AI is actually a relatively small part of that.
Yeah.
So it's not that fair to take AI and say, okay, this is the
only AI that matters anymore.
Yeah, it's the one we're paying attention to, but it's actually
built on the shoulders of giants, in some way.
There's been so much work going on for so many decades.
Brent Smolinski: So the relentless march of AI progress, right?
It's just, you know, the technology continues to evolve and continues to
permeate our application landscape in ways people don't even realize.
Tim Hwang: Yeah.
And I think that is something that we, like, do forget quite a bit, is that, you know, for a long time it was like, eh, what's happening in NLP?
Everything's about computer vision.
That's the really exciting thing.
Or like, everything's about reinforcement learning.
That's really how we're going to get to, you know, next generation systems.
And then kind of just like, everything sort of like flipped
in a very unexpected way.
It reminds me of this tweet that I saw that I thought was amazing.
So there's this adage in financial markets, which is like, the
market can be irrational longer than you can stay solvent.
And the person's tweet was basically that like, um, you know,
a sigmoid can stay exponential for longer than you can stay solvent.
Which I think is just, you know, beautiful in some ways.
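As an aside for the mathematically inclined, the joke checks out numerically: a logistic (sigmoid) curve is nearly indistinguishable from an exponential in its early phase and only later saturates. This is a minimal sketch with arbitrary illustrative parameters, not anything from the episode:

```python
import math

def logistic(t, L=1.0, k=1.0, t0=10.0):
    """Logistic (sigmoid) curve: looks exponential while t << t0, saturates at L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

def exponential(t, a, k=1.0):
    """Pure exponential with the same rate constant k."""
    return a * math.exp(k * t)

# Match the exponential to the logistic at t = 0, then compare over time:
# early on the two track each other closely (the sigmoid "stays exponential"),
# but by t = 12 the sigmoid has begun to saturate and the curves diverge.
a = logistic(0)
for t in [0, 2, 4, 8, 12]:
    print(f"t={t:2d}  logistic={logistic(t):.5f}  exponential={exponential(t, a):.5f}")
```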
I think one thing I did want to touch on in the report is that it does
focus on some interesting potential constraints on growth, which I
think are really genuine, right?
Like, I think we debate about how far the tech can go in the economy.
But I think one of the most interesting stats they cited was the idea that,
you know, per query, the power draw for something like OpenAI is like 10 times the power draw for a query on something like Google.
And, you know, it is true that, like, energy is becoming kind of a
constraint on these things, right?
Like, if you want to run mega, mega, mega clusters, it actually just
turns out that, like, in the United States, there's actually, like, only
a few places that have the physical plant that's necessary to do this.
And I guess I'm kind of curious if you all sort of buy that, is that we, we may,
you know, I kind of buy the argument that, like, well, are we demand constrained?
It's kind of anyone's guess, but we may very well become supply constrained.
Brent Smolinski: Yeah, I mean, listen, uh, these same kind of arguments were applied
with cloud computing like 10 years ago.
People were worried about power consumption, yet we were able to build out the infrastructure and solve these power problems.
I would argue even the algorithms underlying a lot of these models
are improving and becoming much more efficient, which translates
into computational efficiency, which translates into energy efficiency.
So I, I think these problems will get solved, right?
I think in my mind, the biggest problem to figure out is how do
I get value out of this, right?
Once we kind of begin cracking the kind of the AI, the value problem with
a, you know, applying some of these, these large kind of transformer based
architectures to real world business problems, I think that's going to
unlock a floodgate of demand, right?
Um, and then at that point, at that point, we can begin talking
about the supply constraint.
But right now, I think it's a second order problem to think about.
And I'm very confident this problem will get solved.
Tim Hwang: Marina, I'm curious if I could turn to you as just a kind of a
question on the last thought on the story.
I mean, is it right to say, maybe the right way of thinking about this, and
I don't know if you agree with the statement, is that, you know, there
may very well be a bubble in something like language models, but I think we
should doubt whether or not there is a bubble in sort of AI writ large.
I don't know if you'd agree with that as kind of a way of sort of framing up.
You know, what's going on here?
Marina Danilevsky: I don't think there's a bubble in AI.
I think it's Ecclesiastes: seasons.
We have winters, we have summers, and it goes, and it goes.
So right now, there's a lot of attention.
But also, I'm like, all right, I was doing NLP before it was cool.
I'm gonna be doing NLP after it stops being cool.
Like, Those of us that are on the ground are just going to continue to push and
that's where these things come from.
Sometimes it becomes of interest to people, sometimes it doesn't.
Um, to the thing that you had said before, you know, does it make
sense to throw a large language model at every single query?
Maybe not, but I think right now because the technology is early, everybody's
just seeing, let's see what it can do.
Let's test it as much as we can and it will eventually settle into, it's no
longer a hammer in search of a nail, it'll settle into something that we're just
as comfortable with as with search when it first, uh, started being a big thing
and everybody's like, Oh, we're done.
No more information organization necessary.
We've solved it.
No, but it's very, very useful nonetheless.
So yeah, I think, I think we're in a season.
The season will pass.
Tim Hwang: So for our next segment, I think the thing I really wanted to focus on was that there was a cool little thing that was being passed around, uh, Twitter fairly recently.
So Pietro Schirano, who is this AI engineer and kind of a serial
entrepreneur based out in the Bay Area, um, just updated a project that
he maintains, an open source project called Claude Engineer, um, and it's
basically an open source project that allows coders to use Claude 3.5 Sonnet, um, from the command line.
And, you know, what I love about this project is there's a bunch of kind of
creative features running under the hood.
You know, he's playing around with agents and he's playing around with
just like a bunch of these kind of like little quality of life improvements.
And you know, again, this is not a big release from an Anthropic or an OpenAI, the kinds of things that we've talked about in the past.
But I do really think it's kind of interesting because, you know, I think
we've been so locked into, like, Copilot as, like, thinking about how coding
assistance works with generative AI.
Right. Yeah.
And I think what Claude Engineer is playing around with is to say, well, actually, in the future, we might want to do more than just, like, predictive code; like, kind of like Stack Exchange on demand, basically.
Um, and so, you know, I guess, Chris, I see you nodding.
Maybe I'll go to you first, you know, as, as I'm wondering if you could explain to people who are listening, maybe non-experts, not coding day in, day out, like, what is the kind of promise?
Do you see anything sort of interesting in what's happening with Claude Engineer?
Like, what does the future of kind of coding assistance with AI look like?
Um, and if there's like particular things you think are cool in Claude Engineer
product or project or, or otherwise, just be kind of curious about how you think
this kind of whole interface evolves.
Chris Hay: Yeah, I think it's really interesting what he's
done with Claude Engineer.
It is so simple.
It is literally just a command line application.
You run it in the terminal in VS Code, so no extensions or anything like that.
You put in your cloud key, and then it uses all of the tools
that you would normally have with agents running in the background.
So you give it a task, a goal, and then it can create folders on your machine.
It can go and create entire files, and then it can stitch that all together
to help you build entire applications.
And when I think about this for a second, Copilot is very typically a
kind of prescriptive model in the same way as we chat with our interfaces.
The interesting thing about Claude Engineer, it's really embraced
the agent methodology, it's agentic, it is goal oriented, and
I think that's where we really are going to be going as an industry.
So rather than me sitting there typing in a couple of letters, you know, waiting for Copilot to come back with a response, and then it gives me a bit of a code segment, I don't like it, I delete it, and then I sit and pause again, you know.
The Copilot pause is going to go away and we're going to give these
agents goals and tasks and they're going to come back and help us build
entire applications and, and, and really sort of start to orchestrate
and, and build workflows there.
And what's really going to happen, I love Claude Engineer, but I suspect Copilot is just going to steal all of that and build it into their extension anyway.
Tim Hwang: Yeah, that's right.
I mean, I think, yeah, that is one really interesting element of this is
like how much projects like this can survive going forwards because they just
get absorbed directly into the product.
I guess, Chris, maybe if I can kind of turn the screw one more time, I think
there's one sort of comment that you just had there about basically like Copilot
being very prescriptive in nature.
Um, and do you want to talk a little bit more about that?
I guess what you're kind of saying is that, like, when you use Copilot, it literally recommends the code that you should be using as an engineer, versus, I guess, where you're contrasting here with, uh, Schirano's project, it's more just that, like, you're specifying more of an objective, and it's kind of like assisting you in getting to that objective.
Is that, is that the distinction you're drawing?
Chris Hay: It's assistive as opposed to agentic.
So when I'm in Copilot, I will type a comment or the first couple of
letters, and then I kind of wait.
So, uh, it's not giving me an end-to-end goal.
It's not building me an entire application.
It's, it's really just a smart, um, uh, IntelliPrompt to, to be honest, right?
So it's then just going to complete, uh, the piece of code that I'm writing.
So maybe it'll deal with that at a function level, it might deal with it at a line level, whereas in an agentic approach, with Claude Engineer, it's, yeah, starting to scaffold entire applications and entire workflows and orchestration.
And that's a completely different mindset from what Copilot has today.
And I think that's the big shift that's happening.
Um, again, it's really simple.
It just runs on a command line.
It's beautiful.
Um, but I think a lot of people are going to riff off of that and we're going
to get tons of tools and I'm excited.
Tim Hwang: Marina, I'm curious if you have any thoughts kind of on where some of this goes, and in particular I was sort of interested because, you know, what's cool about it is, I think, what Chris is kind of coming back to over and over again, which is sort of like, it's just in the command line, right? Like, it's almost, like, unfancy. We don't have to make a big deal about the AI being part of your coding experience; it's just in the command line.
But I'm kind of curious about, like, what else you think might be coming down the pike with this kind of project. Uh, in particular, you know, I think one of
the reasons I think we were excited to have you on the panel for this episode
is like, you know, starting to combine stuff like, okay, well, we have agents,
and then we've also got rag, and then we've got, you know, there's a bunch of
things that you think can start to connect together, um, and, uh, and just kind of
curious about how you think, like, you know, these types of patterns go going
forwards for, for coding assistants.
Marina Danilevsky: Two directions.
One is, uh, by engineers, for engineers, and this is much more of a by-engineers-for-engineers thing.
Like why do we have IDEs?
Why do we have more than one?
A lot of these things really got created by people because they
say, look, I know my workflow better than you know my workflow.
I'm going to create tools that work for me.
Other people are then going to be able to make use of it
and say, yeah, that's great.
People used to have the, you know, Emacs versus Vim fight.
Now we have, you know, Eclipse versus VS Code versus whatever.
But it really, most of those features do come from people saying,
this is something that's helpful to me and I'm going to do it.
So that's where this project sort of falls for me on the flip side.
When you start to be able to combine things.
We might finally have something interesting going on in the low
code, no code space, which up to now has been like, isn't it great?
We can, you know, arrange some visual blocks and you stick it together.
And that's, like, programming.
Tim Hwang: And you're, like, programming.
Marina Danilevsky: You're programming now? No, you're not.
Um, so we might actually be finally seeing something kind of interesting there.
Although again, the persona is different, so you do have to design different things.
But this goes to the fact that most of these things really come from, I think, individuals, even if Microsoft adopts it later.
It's still the people on the ground that come up with the idea and go,
Okay, this is what actually works, guys.
Here, do it this way.
That's where, you know, that kind of innovation comes, in my opinion.
Tim Hwang: Yeah, for sure.
Yeah, I'm looking forward to this generating, like, a new generation of endless nerd fights that are like Vim versus Emacs, but, dating myself, yeah.
Yeah.
Um, Brent, I guess I'm curious if you want to zoom up for us a little bit and
kind of talk about this in the context of the broader competition, right?
So I see this as kind of like, you know, at least for me, I think the
vibe shift has been Anthropic is now ahead of OpenAI a little bit, right?
Like they're, they're the cool cats.
They're doing the really interesting things, but I think a big part
of the battle is, like, what we're seeing here with Claude Engineer, right, is, like, our third party engineers being like, this is so cool.
I'm going to design my own third party product on top of like the services
these foundation models are providing.
And, yeah, I'm kind of curious about your take here about, like, this evolving competition between OpenAI and Anthropic, I guess ultimately for the hearts and minds of, like, the engineers producing, you know, code out there in the world, and, and if you've got a feeling on, like, who's winning, who's advantaged, who's up, who's down.
Brent Smolinski: Well, it certainly feels like things are shifting towards Anthropic and, and Claude.
That's for sure.
I think a big part of it, I mean, is the economics; the cost effectiveness of these Claude models is, uh, they're, they're significantly more cost effective than, than OpenAI's.
And so many of my clients are actually, um, moving away from, uh, OpenAI towards, towards Claude for that very reason.
That's really interesting.
I do have to say, we recently did an, an engagement with, um, the
senior executive team, uh, at one of our clients, and the
team developed this amazing prototype.
It was an RFP, uh, generator, and they were able to develop this in,
like, three weeks, or, I mean, some incredibly short period of time.
It was very powerful. I mean, you can almost use it outright
to generate these, these RFPs.
There's a few tweaks you'd have to make at the edges.
And I think everybody was, was blown away, um, uh, by how quickly they
were able to pull this application together. And again, a lot of it,
they, they built this on Claude, using a lot of these code-generation tools as well.
Tim Hwang: Well, I'm gonna move us to the final segment of today, and I apologize: as
a, as a person who trained as an attorney, I'm always, like, watching the legal side
of all this, and so I was very curious to see and get the opinions of the panel on a
story that just happened a few weeks back.
Um, the Recording Industry Association of America, or RIAA, is basically,
um, sort of the music industry's representatives, lobbyists,
advocates, um, in the United States.
Um, and they launched a high profile lawsuit against two companies, uh, Suno
and Udio, which are these two companies that are in the generative music space.
So kind of the idea, if you've played around with a, uh, product
like Suno, um, you download the app.
You basically say, I want a song that matches the following
characteristics, and it just generates the
song, and it's, it's actually quite good.
Um, and presages this kind of really strange world where you're just like,
you know, you like Taylor Swift.
Cool.
You can just get, you know, a hundred hours, a thousand hours of Taylor Swift-sounding
noise, basically, going forwards.
Um, and, uh, the RIAA sued both of these companies essentially claiming,
uh, copyright infringement, right?
And a big part of their claim leans on the fact that these
companies are training on music that is ostensibly owned by rights holders.
Um, and so we're about to see this big showdown.
You know, similar versions of this lawsuit have popped up around OpenAI
and Anthropic and other companies, but I think this is the first time we've seen
a really high profile one happen around music, which I think is very interesting.
Um, and I think the other thing that's very interesting to me is, um, you
know, how it's going to evolve, right?
Like, you know, for example, in the book space, right?
For like Kindle, you know, like, I feel like there was a period of time where
basically, you know, the kind of like piracy didn't really kind of take off.
And so we have certain norms around ebooks that we don't
have around, say, music, right?
Um, and I think basically, You know, we're starting to see that
evolution happen around different generative AI applications.
Um, and I guess, Marina, I kind of want to toss it to you is like, as
someone who's kind of a researcher in the space, training models in
the space, you know, I think the big question for me is, you know, How do
you think about these kinds of lawsuits?
Right?
Because I think there's one point of view, which is, well, look, if the
RIAA gets its way, there's kind of no way to do these products, just because
of the sheer number of music files you need to put into these kinds of models
to get them to have high performance.
Um, do you think that's the case?
Or am I kind of like overstating the risks here?
Marina Danilevsky: Um, I mean, I think this, as usual, is going to revolve
around discussions of fair use.
And if you train on the music just as if you train on the text and then you
throw it away and you just keep the features and the weights for the model,
what if that counts, what if it doesn't?
Um, but I, first of all, again, cat's out of the bag, people are
going to do it anyway, so you got to figure out a way to do it.
Um, second, this reminds me of, you know, discussions of, well, what
if somebody posts something on an internet platform. I'm going to
misremember what the legal thing is.
It's the DM something or another of like,
Tim Hwang: DMCA, yeah, the copyright,
Marina Danilevsky: yes, of the like, you can't sue me just because somebody
put something bad on my platform.
I don't know.
It just reminds me of the same kind of thing where people are going
to continue to do the technology.
You're going to have to find a way around it.
The RIAA is going to push for what they're going to push for.
They're not going to get their way, and at some point in time, they're going to have
to learn to live with it, because again, you can't stop people from, from doing it.
Tim Hwang: Yeah, I mean, it does remind me a little bit of the early 2000s, right,
where, you know, Napster came up, and file sharing became a thing, and the RIAA did
the same thing, which is like high profile lawsuits against file sharers, and then,
You know, I guess, Marina, to your point,
it didn't really stop file sharing.
But it also didn't break the music industry, right?
Uh huh.
That's right.
Chris Hay: So I, I think the RIAA, or whatever their acronym is, is
gonna rip apart Suno and Udio.
They are gonna win this.
So I, I spent this morning reading that.
And they went after the angle that I thought they would go after, which
is, of course, they were going after the inputs, but, but if you look at
the actual complaint, they focus in on the outputs, the individual songs.
So they brought up, um, I think one of them was, kind of,
Chuck Berry's Johnny B. Goode.
And then they brought up, um, some other one in one of the other
complaints and they brought up the kind of the musical chords.
And then they were like, this has this style and this has this style.
These notes are identical and this is why they're going to win.
And, and the reason it's going to win is there's prior cases, uh, and Tim, you can speak
about this a lot more, where you've seen, like, the Ed Sheeran case, where he had to
prove that he didn't steal from this one, and then there was that one where, um,
I think the Verve had stolen some lick from, like, the 1960s, and they couldn't
play their song for ever so long. And so there is prior case law on not being able to use
outputs that have got similar chords, got similar musical progressions, yeah.
And they're going to hit them with that and they're going to win and
there's literally nothing they're going to be able to do about it.
So even if you don't win on the inputs, if you make the fair use argument that
you make with books, that's not going to hold true on the outputs, because
they're just going to point to prior case law and say, well, actually,
this was a copyright infringement, this was a copyright infringement.
And then you're going to have to pay for all of those outputs.
So what will probably happen with the generative AI there is they're
going to have to then start to check the outputs, the songs, to see
they don't infringe existing songs.
So I think it's going to get super messy, but they're going to win big.
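Chris's idea of checking generated outputs against the existing catalog could be sketched, very roughly, as an embedding-similarity filter. Everything below is invented for illustration — the catalog names, the vectors, and the 0.95 threshold; a real system would work over learned audio embeddings rather than these toy three-dimensional ones.

```python
import numpy as np

# Hypothetical catalog of copyrighted-song embeddings; the names
# and vectors are invented purely for illustration.
catalog = {
    "song_a": np.array([0.9, 0.1, 0.0]),
    "song_b": np.array([0.2, 0.8, 0.1]),
}

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def flag_possible_infringement(output_embedding, catalog, threshold=0.95):
    """Return the catalog entries whose embeddings are suspiciously
    close to a generated track's embedding."""
    return [
        name for name, emb in catalog.items()
        if cosine_similarity(output_embedding, emb) >= threshold
    ]

# A generated track that lands very close to "song_a" in this toy space.
generated = np.array([0.89, 0.12, 0.01])
print(flag_possible_infringement(generated, catalog))  # → ['song_a']
```

The hard part, as the panel goes on to discuss, is not the filter itself but deciding where the threshold sits — how close is too close.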
Tim Hwang: Yeah, and I think that the messiness, I think, is really
interesting, because, you know, I used to do a bunch of work, and still do,
around trust and safety in AI, right?
And there you're trying to say like, well, we're going to use RLHF and
we're going to create all these mechanisms to try to like constrain
the behavior of the model, right?
And a lot of what you learn is basically that, for anything you try to do to kind
of, like, prevent model behavior, or, like, block the impermissible behavior,
there's lots and lots of ways of subverting it, right?
Particularly against, sort of, like, a user that's adversarial.
Um, and I think part of the worry that I have here is, sure,
you know, you're setting up a world where it's like, look, your model can
output stuff that sounds almost exactly like this copyrighted training data.
But then basically you're saying, okay, company, now you're
responsible for preventing that.
And I guess I would ask the question of like, is that actually possible?
Like, I think from a technical standpoint, we don't have a whole lot of examples
of being able to kind of, like, really categorically block it. Or, at the very least,
it kind of begs the question: when is an output so close to the training data that
it really should be a copyright violation?
I think that's kind of an open question.
Chris Hay: I think it's, I think it's a hard thing, right, because there's so much music.
As far as, um, the record companies are concerned,
it doesn't matter for them.
Anytime they find an infringement, they're
just gonna sue the company, right?
And it's gonna be so difficult that it's not gonna be worth anyone's while.
Um, so, as I said, I think, I think it's going to be interesting and hard.
I think this case is different.
Maybe, maybe it turns out not to be the case and maybe we'll generate
so much music and maybe synthetic data will actually be the solution
to this because you just therefore invent a completely new style of
music that isn't based on the past.
And then, and then the outputs are not going to infringe.
Maybe that's the solution.
But Uh, it's definitely going to get messy.
I, I just don't see these companies surviving it.
Marina Danilevsky: Maybe not these two, but I think the actual
technology is going to survive.
They might kill these two, but I agree with what you said at the end,
actually, Chris, which is, okay, so you find some way to figure out
what is a dissimilar enough distance between music that it's okay.
And you just do that by looking at all the music that's out there and saying,
well, these two are, you know, this far apart, so you can't say anything.
There's already been years and years and years of study for this.
We've got Pandora and Spotify and how do we do radio and how do we do
recommendations and all the rest of it.
We've got an embedding space to work with.
We've got things to do there.
So they'll just keep pushing to the point that
it's absurd to have the RIAA complain about a really specific thing.
And that's what I'm gonna bet on.
Chris Hay: Spot on, Marina, right?
That is the way around this, because then you get to make the fair use argument
again, because you're saying, well, I'm sampling these different cases.
I'm not infringing anybody's copyright.
So I totally agree.
I think we're going to end up with a new style of music.
And, and that will be the interesting thing.
Brent Smolinski: Chris, you know, you bring up a very
interesting, you know, argument.
And, you know, the question I have is, is how creative can
these platforms actually be?
I mean, is what they create truly original, or can it truly be original?
Uh, and, and then the other question I have, too, is, you know, these
platform, uh, providers, uh, you know, they're just platform providers, right?
And so the question is, you know, should these platform providers be held
liable for, for the, um, you know, for, for the content that's created, or should
it be the people creating the content?
Tim Hwang: Yeah, for sure.
And I think we will see eventually stuff like, I mean, on YouTube right
now, there's this really interesting thing, which is a form of content ID.
So the idea is, well, if you want to use copyrighted music,
you can have it in your video.
And basically there's like a royalty that gets paid out if it's detected
that you're using this kind of audio.
And so there kind of could be really, these really interesting models
that sort of emerge where it's like, well, you know, you're allowed to do,
you know, a Katy Perry sound-alike.
Like, she just gets some kind of payout. And then, I mean, you know, Marina, to your
kind of comment, the really interesting question ends up being, well, how do
you figure out the compensation based on its closeness in the embedding, right?
Like, is this 10 percent Kanye, 30 percent Katy Perry, 10 percent Taylor? Like,
how we would actually go about designing that kind of embedding
space is going to be, like, a super, super interesting engineering problem.
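Tim's notion of splitting compensation by closeness in an embedding space could be sketched, purely hypothetically, like this. The artist names and "style" vectors are made up for illustration; a real scheme would need rich learned audio embeddings and a negotiated mapping from similarity to money.

```python
import numpy as np

# Hypothetical per-artist "style" embeddings; the names and vectors
# are made up for illustration only.
artist_styles = {
    "artist_x": np.array([1.0, 0.0]),
    "artist_y": np.array([0.0, 1.0]),
}

def royalty_shares(output_embedding, styles):
    """Split a royalty pool in proportion to each artist's cosine
    similarity to the generated track (negative similarities ignored)."""
    sims = {}
    for name, emb in styles.items():
        sim = float(np.dot(output_embedding, emb) /
                    (np.linalg.norm(output_embedding) * np.linalg.norm(emb)))
        sims[name] = max(sim, 0.0)
    total = sum(sims.values())
    # Normalize so the shares sum to 1 (leave everything at zero if
    # the track resembles no one).
    return {name: s / total for name, s in sims.items()} if total else sims

# A track three parts artist_x to one part artist_y in this toy space.
track = np.array([3.0, 1.0])
print(royalty_shares(track, artist_styles))  # ≈ 0.75 / 0.25 split
```

As the discussion notes, the design question is really the embedding space itself: what "10 percent Kanye" means depends entirely on what the distances measure.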
Um, so, uh, as usual, uh, we have more things to talk about
than we have time to talk about.
Um, but we are out of time for today, so, uh, Chris, Marina, Brent,
thank you for coming on the show.
Uh, as always, it's an awesome discussion, and we'll have to
have you all back at some point.
Thanks for joining us.
If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify,
and podcast platforms everywhere, and we will see you all, uh, next week.