Technology.
00:22
It's advancing faster
00:24
and taking less time to be widely adopted
00:25
than ever before,
00:27
Like, it took roughly 10,000 years
00:30
to go from writing to printing press,
00:32
but only about 500 more to get to email.
00:34
Now it seems we're at the dawn of a new age,
00:38
the age of A.I...
00:39
Artificial Intelligence.
00:41
Please define.
00:42
[automated voice speaking]
00:48
Uh-huh, okay. There you have it.
00:50
What does it mean? I don't know.
00:52
Tons of folks are working on it, right?
00:53
Most people don't know that much about it,
00:55
and of course, there's no shortage
00:56
of data or opinions.
00:58
Anyway, I've heard it said
00:59
that the best way to learn about a subject
01:01
is to teach it,
01:02
but to level with ya,
01:04
I have a wildly incomplete education...
01:07
Not in my day job,
01:08
where I've been A.I.-adjacent for over a decade.
01:11
Anyway, I figured now would be as good a time as any
01:13
to catch up on the state of things
01:15
regarding this emerging phenomenon.
01:17
My sense of it is it kind of feels like
01:20
Pandora's box, maybe... ish?
01:23
Much of my understanding on this topic
01:24
has come from sci-fi stories,
01:26
which usually depict us
01:28
heading toward Shangri-La or dystopia.
01:30
Like most things,
01:31
I suspect the truth is probably somewhere in the middle.
01:34
Now, along the way,
01:35
we'll demystify some common misconceptions
01:37
about things we thought we understood, but probably don't,
01:40
terms such as
01:42
"machine learning," "algorithms,"
01:44
"computer vision" and "Big Data,"
01:46
they will be conveniently unpacked
01:48
to help us feel like we know what we're doing,
01:51
kinda.
01:52
By the way, Pandora's box...
01:58
wasn't a box.
02:00
It...
02:02
was a clay jar.
02:04
How about that?
02:06
Demystified.
02:11
A.I. is teaching the machine,
02:14
and the machine becoming smart.
02:17
Each time we create a more powerful technology,
02:19
we create a bigger lever for changing the world.
02:22
[computer] Autonomous driving started.
02:24
[Downey] It's an extraordinary time,
02:26
one of unprecedented change and possibility.
02:30
To help us understand what's happening,
02:32
this series will look at innovators
02:34
pushing the boundaries of A.I...
02:35
No, stop!
02:37
[Downey] ...and how their groundbreaking work
02:39
is profoundly impacting our lives...
02:40
Yay! [laughing]
02:42
[Downey] ...and the world around us.
02:44
In this episode, we'll meet two different visionaries
02:47
exploring identity, creativity,
02:49
and collaboration between humans and machines.
02:52
Intelligence used to be the province of only humans,
02:55
but it no longer is.
02:56
We don't program the machines. They learn by themselves.
03:09
Mm. Ah. That's good.
03:12
All right.
03:14
My background's always been a mixture of art and science.
03:18
I ended up doing a PhD in bioengineering,
03:21
then I ended up in the film industry,
03:24
working on everything from King Kong to Avatar,
03:27
simulating faces.
03:30
I'd got to a point in my career
03:31
where I'd been, you know,
03:32
lucky enough to win a couple of Academy Awards,
03:35
so I thought, "Okay, what happens
03:37
if we actually tried to bring those characters to life,
03:40
characters you could actually interact with?"
03:43
[toddler crying]
03:45
Baby... Ooh.
03:47
[toddler fusses]
03:48
What can you see?
03:49
So "Baby X" is a lifelike simulation of a toddler.
03:54
Hey. Are you excited to be here?
03:57
She's actually seeing me through the web camera,
03:59
she's listening through the microphone.
04:02
Woo... yeah.
04:04
Baby X is about exploring the nature
04:07
of how we would build a digital consciousness,
04:09
if it's possible?
04:10
We don't know if it's possible,
04:12
but we're chipping away at that problem.
04:14
Hey, Baby. Hey.
04:15
[Downey] "Problem" is an understatement
04:17
for what Mark's chipping away at.
04:19
His vision of the future
04:20
is one where human and machine cooperate,
04:22
and the best way to achieve that, he thinks,
04:25
is to make A.I. as lifelike as possible.
04:28
Peek-a-boo!
04:31
[Baby X giggling]
04:32
[Downey] Which is why he began where most life begins...
04:35
a baby...
04:36
modeled after his own daughter.
04:39
So if we start revealing her layers,
04:41
she's driven by virtual muscles,
04:43
and the virtual muscles, in turn,
04:45
are driven by a virtual brain.
04:47
Now, these are radically simplified models
04:49
from the real thing,
04:51
but nevertheless,
04:52
they're models we can explore to see how they work,
04:54
because we have a real template that exists,
04:57
the human brain.
04:59
So, these are all driven by neural networks.
05:03
[Downey] A "neural network"
05:04
is a virtual, much simpler version
05:06
of the human brain.
05:07
The brain is the most complex system in our body.
05:11
It's got 85 billion neurons, each of which fires non-stop,
05:14
receiving, processing, and sending information.
05:19
Baby X's brain is nowhere near as complex,
05:22
but that's the goal.
05:23
Instead of neurons, it's got nodes.
05:26
The more the nodes are exposed to,
05:28
the more they learn.
05:30
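The node-and-exposure idea can be sketched in miniature. This is a hedged illustration, not Baby X's actual architecture: a single perceptron-style node whose weights get nudged toward the right answer each time it's exposed to a labelled example, here learning the logical AND of two inputs.

```python
# A minimal sketch of a neural-network "node": a unit whose weights
# are nudged toward the correct answer each time it sees an example.

def step(x):
    # Threshold activation: fire (1) if the weighted sum is non-negative.
    return 1 if x >= 0 else 0

def train_node(examples, epochs=20, lr=0.1):
    """Perceptron-style learning: adjust weights in proportion to the error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the node logical AND from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_node(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The more (and more varied) examples a node like this is exposed to, the better its weights fit the pattern — the same principle, at vastly larger scale, behind the networks driving Baby X.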
[Sagar] What we've learned is it's very hard to build a digital brain,
05:32
but where we want to go with it
05:34
is we're trying to build a human-like A.I.
05:37
which has a flexible intelligence
05:39
that can relate to people.
05:41
I think the best kind of systems
05:43
are when humans and A.I. work together.
05:46
One of the biggest misconceptions of A.I.
05:49
is that there is a super-intelligent being,
05:52
or what we call a generalized A.I.,
05:54
that knows all, can do all,
05:56
smarter than all of us put together.
05:59
That is a total misconception.
06:01
A.I. is built on us.
06:03
A.I. is mimicking our thought processes.
06:06
A.I. is basically an emulation of us.
06:11
[Downey] Like visionaries before him, Mark's a dreamer.
06:13
The current state of his moonshot, however,
06:16
is a little more earthbound.
06:17
[computer] Thank you for granting access
06:19
to your microphone. It's good to hear you.
06:22
[Downey] Today, most avatars
06:23
are basically glorified customer-service reps.
06:26
[service avatar] Rest assured,
06:27
your health is my primary concern.
06:29
[Downey] They can answer simple questions
06:31
and give scripted responses.
06:33
I love helping our customers,
06:35
so I'm keen to keep learning.
06:36
[Downey] Beats dealing with automated phone lines, for sure,
06:39
but it's a far cry from Mark's ultimate vision...
06:42
[Sagar] Hey, Baby. Hey.
06:43
[Downey] ...to create avatars that can actually learn,
06:46
interpret, and interact with the world around them,
06:49
like a real human.
06:51
What's this?
06:53
Spider.
06:55
So we're starting to get a spider forming in her mind here,
06:58
she's starting to associate the word with the image.
07:00
So, Baby... spider.
07:04
Spider.
07:05
Spider...
07:06
Good! Okay, what's this?
07:10
[Baby] Spider.
07:12
No. This is a duck.
07:14
Look at the duck.
07:15
[Baby] Duck.
07:17
[Sagar] Yeah.
07:18
[Downey] Baby X uses a type of A.I. called "object recognition."
07:23
Basically, it's how a computer sees...
07:27
how it identifies an object, like a spider,
07:29
or tells the difference between a spider and a duck.
07:33
It's something that you and I do naturally...
07:36
...but machines, like Baby X, need to learn from scratch,
07:39
by basically sifting through enormous piles of data
07:42
to search for patterns,
07:44
so that eventually, it can drive a car,
07:46
or pick out a criminal in a crowded photograph,
07:49
or tell the difference between me and... that guy.
07:52
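The sifting-through-data-for-patterns step can be illustrated with a toy classifier. The features below (leg count, wing count, body segments) are invented for illustration — real object recognition learns features from raw pixels — but the principle is the same: label a new example by comparing it to labelled examples already seen.

```python
# A toy sketch of recognition as pattern-matching: classify a new example
# by finding the closest labelled example among those already seen.

import math

# Hypothetical hand-picked features: (legs, wings, body segments).
labelled = [
    ((8, 0, 2), "spider"),
    ((8, 0, 2), "spider"),
    ((2, 2, 1), "duck"),
    ((2, 2, 1), "duck"),
]

def classify(features):
    """1-nearest-neighbour: return the label of the closest known example."""
    best = min(labelled, key=lambda ex: math.dist(ex[0], features))
    return best[1]

print(classify((8, 0, 2)))  # spider
print(classify((2, 2, 1)))  # duck
```

With enough labelled data, the same nearest-pattern idea (in far richer form) is what lets a system tell a spider from a duck — or one face from another.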
[Sagar] But now I'm gonna tell her that spiders are scary.
07:55
Look out! Rawr! Scary spider! Rawr!
07:59
[crying]
08:00
Hey, hey. Don't cry. It's okay. Hey...
08:03
[Baby crying]
08:04
Hey, it's okay.
08:05
Now she's responding emotionally to me as well,
08:08
so we've gone all the way down
08:10
to virtual neurotransmitters, hormones, and so forth,
08:14
so Baby X has a stress system.
08:16
If I give her a fright...
08:18
Boo!
08:19
So we'll see basically
08:20
some noradrenaline was released then,
08:22
and she's gone into a much more vigilant state of mind.
08:25
[Downey] What Mark is working on
08:27
is known as "affective computing,"
08:29
A.I. that interprets and simulates human emotion.
08:33
I believe that machines are gonna interact with humans
08:36
just the way we interact with one another,
08:38
through perception, through conversation.
08:41
So as A.I. continues to become mainstream,
08:44
it needs to really understand humans,
08:46
and so we want to build emotion A.I.
08:49
that enables machines to have empathy.
08:51
Hello, Pepa.
08:53
-Hello. -[man] Hello.
08:55
-Hello. -Hello.
08:57
-Hello. -[laughing]
08:58
Oh, dear.
08:59
-We can do this forever. -I know we could. [laughs]
09:02
[Howard] They've shown, for example,
09:04
that older adults who have A.I. aides at their nursing homes
09:07
are happier
09:08
with a robot that emotes and is social
09:10
than having no one there.
09:12
That's really the enhancement of human relationships.
09:16
[Sagar] Hey... Hello.
09:19
You know, human cooperation
09:20
is the most powerful force in human history, right?
09:23
Human cooperation with intelligent machines
09:26
will define the next era of history.
09:28
Using a machine which is connected
09:31
with the rest of the world through the Internet,
09:34
that can work as a creative, collaborative partner?
09:37
That's unbelievable.
09:47
[will.i.am] Jessica. Jessica. One more time, one more time.
09:50
We're gonna go from just the first two verses,
09:52
and the first two verses
09:53
will take us to three minutes, okay?
09:56
I love music.
09:57
The whole concept of music is collaboration,
09:59
so if there are some people that see me as a musician,
10:01
that's awesome.
10:06
I first became interested in A.I.
10:08
because A.I. is a very fruitful place to create in.
10:11
It's a new tool for us.
10:13
I dream, and make my dreams reality,
10:16
whether the dream is a song
10:17
or the dream is an avatar of myself.
10:20
One time, a friend was like, "Well, you can't clone yourself.
10:24
You can't be in two places at once."
10:25
That's the promise of the avatar.
10:29
I left it over there.
10:30
All right, here we go.
10:32
[Sagar] So, you're about to enter the Matrix.
10:35
I'm gonna sort of direct you through just a bunch of poses.
10:39
[will.i.am] The team from Soul Machines
10:41
is here to create a digital avatar of myself.
10:44
They had to put me in this huge contraption
10:46
with these crazy lights.
10:49
What do you want me to do?
10:50
[Sagar] Your face is an instrument.
10:52
All the wrinkles on the face are like a signature,
10:55
so we want to get
10:56
the highest-quality digital model of you that we can.
10:59
Okay. [chuckles]
11:01
[Sagar] Yeah, that's perfect. Okay, go.
11:03
[rapid shutters snapping]
11:06
[Sagar] So we have to capture all the textures of their face.
11:09
The geometry of their face...
11:11
Big, gnashy teeth.
11:13
How their face deforms
11:15
to form the different facial expressions.
11:17
And how about a kiss?
11:18
You could do...
11:19
With my eyes closed?
11:20
'Cause I don't kiss with my eyes open.
11:21
Every once in a while, I peek.
11:23
[cameras snapping]
11:25
I wanted to have
11:26
a digital avatar around the idea of Idatity,
11:29
and that's the marriage of my data and my identity.
11:32
Everyone's concerned about, like, identity theft.
11:35
Meanwhile, everybody's giving away all their data for free
11:38
on the Internet.
11:38
I'm what I like and what I don't like,
11:41
I'm where I go, I'm who I know.
11:43
I'm what I search. I am my thumbprint.
11:45
I am my data. That's who I am.
11:48
You pull your eyelids down like that.
11:49
We want to get that... yup.
11:51
[will.i.am] When I'm on Instagram and I'm on Google,
11:53
I'm actually programming those algorithms to better understand me.
11:56
Awesome.
11:58
In the future, my avatar's gonna be doing all that stuff,
12:00
because I'm gonna program it.
12:02
Get entertained through it, get information through it,
12:05
and you feel like
12:06
you're having a FaceTime with an intelligent entity.
12:09
[laughing] "Yo, check out this link."
12:11
"Oh, wow, that's crazy."
12:12
"Yo, can you post that on my Twitter?"
12:15
[laughter]
12:17
-Hey. -Hey.
12:19
All right, I'm the Soul Machines lead audio engineer.
12:22
Hopefully we'll be able to build an A.I. version of your voice.
12:26
After creating Will's look,
12:29
then we now have to create his voice.
12:31
For that, we actually have to capture a lot of samples
12:34
about how Will speaks,
12:36
and that's actually quite a challenging process.
12:39
-Shall we kick off? -Yeah, let's kick off.
12:41
-A'ight, boo, here we go. -Yeah.
12:43
I'm Will, and I'm happy to meet you.
12:44
I'm here to bring technology to life,
12:47
and let's talk about Artificial Intelligence.
12:50
Oops. Really? Whoa.
12:53
That's dope!
12:54
So there's so many ways of saying "dope," bro.
12:57
Yeah, yeah.
12:58
Now, how realistic is it going to be?
12:59
This will sound like you.
13:01
The sentences can be divided up into parts
13:04
so that we can create words
13:06
and build sentences, like LEGO blocks.
13:08
It will sound exactly like you.
13:11
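The "LEGO blocks" description is, in essence, concatenative synthesis: recorded sentences are cut into reusable units, and new sentences are assembled from those units. A heavily simplified sketch, with made-up sentences and placeholder clip names standing in for actual audio:

```python
# A sketch of concatenative synthesis: cut recorded sentences into
# word-sized "blocks", then rebuild new sentences from those blocks.
# The recordings and clip filenames here are invented placeholders.

recorded = [
    "i am happy to meet you",
    "let us talk about artificial intelligence",
]

# Index each word once, mapping it to a (hypothetical) audio clip.
units = {}
for sentence in recorded:
    for word in sentence.split():
        units.setdefault(word, f"clip_{word}.wav")

def synthesize(text):
    """Assemble a new utterance from existing blocks; fail on unseen words."""
    clips = []
    for word in text.lower().split():
        if word not in units:
            raise ValueError(f"no recorded unit for {word!r}")
        clips.append(units[word])
    return clips

print(synthesize("let us meet"))
# ['clip_let.wav', 'clip_us.wav', 'clip_meet.wav']
```

Real systems work with much smaller units than words and smooth the joins, which is why capturing many varied samples of how Will speaks matters so much.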
Well, maybe we don't want to have it too accurate.
13:14
So you don't freak people out, maybe I don't want it accurate.
13:18
Maybe, there should be some type of...
13:20
"That's the A.I.,"
13:21
'cause this is all new ground.
13:23
-Yeah. -Like, we've...
13:25
we are in an intersection of a place
13:27
that we've never been in society,
13:28
where people have to determine
13:31
what's real and what's not.
13:35
[Downey] While Mark jets back to New Zealand
13:37
to try to create Will's digital doppelganger,
13:39
Will's left waiting, and wondering...
13:42
can Mark pull this off?
13:44
What does it mean
13:45
to have a lifelike avatar of you?
13:47
A digital replicant of yourself?
13:50
Is that a good idea?
13:52
How far is too far?
13:54
[Domingos] We've been collaborating with machines
13:56
since the dawn of technology.
13:58
I mean, even today,
14:00
in some sense, we are all cyborgs already.
14:02
For example,
14:03
you use OKCupid to find a date,
14:06
and then you use Yelp to decide where to go, you know,
14:09
what restaurant to go to,
14:10
and then you start driving your car,
14:12
but there's a GPS system that actually tells you where to go.
14:15
So the human and the machine decision-making
14:17
are very tightly interwoven,
14:19
and I think this will only increase as we go forward.
14:25
[Downey] Human collaboration with intelligent machines...
14:29
A different musician in a different town
14:31
with a different approach
14:32
is giving the same problem a shot.
14:34
[Gil Weinberg] People are concerned
14:36
about A.I. replacing humans,
14:38
and I think it is not only
14:40
not going to replace humans, it's going to enhance humans.
14:45
I'm Gil Weinberg. I'm the founding director
14:48
of Georgia Tech Center for Music Technology.
14:50
[plays piano]
14:51
Ready?
14:54
In my lab, we are trying to create the new technologies
14:57
that will explore new ways to be expressive...
15:00
to be creative...
15:02
Shimon, it's a marimba-playing robot.
15:05
[playing marimba]
15:08
What it does is listen to humans playing,
15:11
and it can improvise.
15:15
Shimon is our first robotic musician
15:18
that has the ability to find patterns,
15:20
so, machine learning.
15:23
Machine learning
15:25
is the ability to find patterns in data.
15:28
So, for example, if we feed Shimon Miles Davis,
15:31
it will try to see
15:32
what note is he likely to play after what note,
15:34
and once it finds those patterns, it can start to manipulate them,
15:38
and I can have the robot playing in a style
15:40
that maybe is 30% Miles Davis, 30% Bach,
15:43
30% Madonna, and 10% my own,
15:46
and create morphing of music that humans would never create.
15:50
[band playing tune]
15:55
[Downey] Gil's groundbreaking work
15:56
in artificial creativity and musical expression
15:59
has been performed by symphonies around the world...
16:03
...but his innovation
16:05
also caught the attention of another musician...
16:07
Okay.
16:08
[Downey] ...a guy who unexpectedly pushed Gil
16:10
beyond enhancing robots
16:12
to augmenting humans.
16:15
[Weinberg] I met Jason Barnes about six years ago,
16:17
when I was just about finishing one phase of developing Shimon,
16:20
and I was starting to think, "What's next?"
16:24
[Barnes] I got my first drum kit when I was 15, on Christmas,
16:27
and when I lost my limb, I was 22,
16:30
so I was kind of used to having two limbs.
16:34
I started trying to fabricate prosthetics
16:37
to try and get me back on the kit,
16:38
which eventually led me to working and collaborating with Georgia Tech.
16:41
[playing drums]
16:44
[Weinberg] He told me that he lost his arm,
16:46
he was devastated, he was depressed,
16:48
music was his life,
16:49
and he said, "I saw that you develop robotic musicians.
16:53
Can you use some of the technology that you have
16:55
in order to allow me to play again like I used to?"
16:59
So that's the prosthetic arm that we built for Jason.
17:02
When he came to us,
17:04
he just wanted to be able to use sensors here
17:06
so he can hold the stick tight or loose.
17:09
I suggested "Let's do that, but also,
17:12
let's have two sticks.
17:13
One stick can operate with a mind of its own,
17:15
understanding the music and improvising.
17:17
One stick can operate based on what you tell it with your muscle,
17:20
and also, each one of the sticks can play at 20 hertz...
17:24
...faster than any human,
17:26
and together, they can create polyrhythm,
17:27
create all kind of textures that humans cannot create."
17:31
All right. I think we're ready to play.
17:33
[all playing tune]
17:38
[Downey] In some ways, the robotic drum arm
17:40
allows Jason to play better than he ever has,
17:43
but it still lacks the true function,
17:45
or feeling, of a human hand.
17:47
[Weinberg] They don't provide
17:49
the kind of dexterity and subtle control
17:51
that would really allow him to play anything.
17:55
[Downey] This revelation
17:56
drove Gil to his next innovation...
17:58
the Skywalker Hand.
18:02
Inspired by Luke Skywalker from Star Wars,
18:04
and created in collaboration with Jason,
18:07
the revolutionary tech
18:09
brings what was once the realm of sci-fi
18:11
a little closer to our galaxy.
18:13
[Barnes] This is just like a 3D-printed hand
18:15
that you can, like, download the files online.
18:18
[Downey] Currently, most advanced prosthetic hands
18:20
can't even thumbs-up or flip you the bird.
18:24
They can only open or grip,
18:26
using all five fingers at once.
18:28
Most of the prosthetics that are available on the market nowadays,
18:32
um, actually use EMG technology,
18:34
which stands for "electromyography,"
18:35
and essentially what it does is there are two sensors
18:38
that make contact with my residual limb,
18:40
and they pick up electrical signals from the muscles...
18:43
So again, when I flex and extend my residual limb,
18:46
it will open and close the hand,
18:47
um, and I can rotate as well,
18:50
but the problem with EMG
18:51
is that it's a very vague electrical signal, just zero to 100%.
18:55
It's not very accurate at all.
18:56
The Skywalker Hand actually uses ultrasound tech.
18:59
Ultrasound provides an image,
19:01
and you can see everything that's going on inside of the arm.
19:04
[Downey] Ultrasound uses high-frequency sound waves
19:07
to capture live images from inside the body.
19:11
As Jason flexes his muscles
19:13
to move each of his missing fingers,
19:14
ultrasound generates live images that visualize his intention.
19:20
The A.I. then uses machine learning
19:23
to predict patterns,
19:24
letting a man who's lost one of his hands
19:26
move all five of his fingers individually,
19:29
even if he's as unpredictable as Keith Moon.
19:32
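The prediction step described here can be sketched as a simple classifier: features extracted from an ultrasound frame of the forearm muscles get mapped to the intended finger. The feature numbers below are invented, and a real pipeline uses far richer models, but the train-then-predict shape is the same.

```python
# A sketch of intent prediction: learn one prototype ("centroid") of
# muscle-image features per finger from a training session, then label
# a new frame by its nearest prototype. Feature values are invented.

import math

def centroid(samples):
    return [sum(col) / len(col) for col in zip(*samples)]

def train(labelled_frames):
    """One centroid per finger, built from the user's training frames."""
    by_finger = {}
    for features, finger in labelled_frames:
        by_finger.setdefault(finger, []).append(features)
    return {finger: centroid(s) for finger, s in by_finger.items()}

def predict(model, features):
    return min(model, key=lambda f: math.dist(model[f], features))

# Training: the user flexes toward each phantom finger while frames are recorded.
frames = [
    ([0.9, 0.1], "thumb"), ([0.8, 0.2], "thumb"),
    ([0.1, 0.9], "index"), ([0.2, 0.8], "index"),
]
model = train(frames)
print(predict(model, [0.85, 0.15]))  # thumb
print(predict(model, [0.15, 0.85]))  # index
```

This is why the training session matters: the model is fitted to one person's muscle patterns, which is also what makes generalizing to a new amputee an open question.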
[Howard] The work that Gil is doing
19:34
is really important.
19:35
Gil comes from a non-engineering background,
19:37
which means that his technology
19:39
and the way he thinks about robotics
19:42
is actually quite different
19:43
than, say, the way I would think about it,
19:44
since I come from an engineering background.
19:46
And the commonality is that we want to design robots
19:49
to really impact and make a difference in the world.
19:53
[Weinberg] We were able to create a proof of concept
19:56
with Jason Barnes.
19:57
Once we discovered that we can do this with ultrasound,
20:01
immediately I looked at,
20:03
"Hey, let's try to help more people."
20:10
[Jay Schneider] That's okay, just leave me hanging, holding it.
20:13
It's not heavy or anything.
20:14
[Barnes] It's safe, if you want to slide it back...
20:15
No, no. I'm messing with you.
20:17
So I met Jason Barnes
20:18
at an event called "Lucky Fin Weekend."
20:20
They're a foundation that deals with limb difference.
20:23
There we go.
20:25
-Ah, all right. -And it's out.
20:27
[Schneider] Do you ever work on your car
20:29
without the hook?
20:30
Not really. It's just way easier and more efficient for me to...
20:33
The hook, the hook really trips me out, though, man.
20:36
[Schneider] When I lost my hand,
20:38
it was close to 30 years ago,
20:39
and prosthetics were kind of stuck in the Dark Ages.
20:42
[rock drums and bass playing]
20:47
In general, they didn't really do a whole lot,
20:50
and even if they moved,
20:52
they seemed to be more passive than actually worthwhile to use.
20:58
I don't like to talk about my accident,
21:01
because I don't feel it defines me.
21:03
The narrative on limb-different people
21:05
has been the accident.
21:07
"This is what happened, and these are these sad things,"
21:10
and it becomes inspiration porn.
21:14
For me, for example, right, if I do something,
21:17
I have to, like, smash it out of the park,
21:19
because otherwise I feel like there's gonna be this,
21:21
"Oh, well, he did it good enough because he's missing his hand."
21:24
-Yeah, yeah. -And I'm like, "F that!"
21:25
Like, I want to... I'm gonna be as good or better than somebody with two hands
21:29
doing whatever I'm doing, you know?
21:32
Prosthetics, at this point in my life,
21:34
don't really seem like something I would want or need.
21:37
[Weinberg] Manual robotic prosthetics
21:40
have not been adopted well.
21:41
Amputees try them,
21:42
and then they don't continue to use them.
21:50
[Barnes] Yeah, man, you stoked to check out the lab?
21:53
Yeah, yeah, for sure.
21:54
Right now, I'm the only amputee that's ever used
21:57
the Skywalker Arm before.
21:58
Did you have... were you right-handed?
22:00
No, I was born left-handed, actually.
22:02
Oh, you lucky bastard.
22:03
-Yeah, I know, right? -I was right-handed.
22:05
[Barnes] It was extremely important
22:06
to get as many different people as we can in there,
22:09
including other amputees.
22:10
It's hard to find people that are amputees in general,
22:13
and then, like, upper-extremity amputees is the next thing,
22:16
and then finding people who are willing,
22:18
to step out of their comfort zone
22:20
-and then do this. -Right.
22:22
[Schneider] When I met Jason,
22:23
I found it really interesting that we had a lot in common,
22:26
because we were both into cars, we were both into music.
22:30
-Hi, Gil. -Hey. What's up?
22:31
-Jason. Nice to meet ya. -Nice meeting you.
22:33
He's a step or two ahead of me with the technology stuff.
22:36
[Barnes] The way this hand works is it essentially picks up
22:39
the ultrasound signals from my residual limb,
22:42
so when I move my index finger,
22:43
it'll move my index...
22:45
ring...
22:47
[Schneider] Wow, for the first time,
22:48
prosthetics are finally getting to the point
22:50
where they're getting pretty close
22:52
to an actual human hand.
22:54
You know, it got me excited. I was like,
22:55
"This is the type of thing that I've been waiting for."
22:58
If I was ever going to try one again,
22:59
this would be the type of stuff that I would want to check out.
23:02
When I move my thumb...
23:04
[laughter]
23:08
I know from experience
23:10
that it's not always working perfectly.
23:12
It's very interesting for me to have someone else
23:14
who comes and tries our technology
23:16
to see if it can be generalized.
23:20
Is my arm getting warmer because you're wrapping it,
23:23
or does that have heat in it?
23:24
-It does have heat in it. -Oh, okay.
23:26
First thing we need, if we're gonna get Jay to try the hand,
23:29
is we need to get a custom-fit socket to his arm
23:32
that's comfortable and fits nice and snug.
23:34
You comfortable when they do this?
23:36
This is the most awkward part for me.
23:38
-Nah, it was kinda weird. -Ah, yeah. Yeah.
23:40
I was 12 years old when I lost my hand
23:42
and had a prosthetic for six months,
23:44
and pretty much ever since then, I haven't used it,
23:46
and it's been close to 30 years now.
23:48
And there's the impression of your arm.
23:50
That's way easier than I thought it was gonna be.
23:52
That's wild, yeah!
23:53
It may not be right for me, but this is something
23:56
that could really, really help people's lives.
23:58
It would be really cool
23:59
to have a hand in helping to develop the technology.
24:04
All right.
24:06
All right, ready?
24:08
Just slide it in.
24:10
Turn this... tighten.
24:12
[knob ratcheting]
24:13
How tight?
24:14
As tight as you can before it really hurts...
24:16
-Oh, really? -...because the tighter it is,
24:18
-the better reading we'll see. -Okay.
24:20
-Now we apply the probe... -Okay.
24:22
...so it can read your movements.
24:24
Now we also
24:25
have to work on the algorithm and the machine learning,
24:27
and for this, we will need you to train.
24:29
Okay.
24:30
As an able-bodied person, when you move your finger,
24:33
you're not thinking about moving your finger,
24:35
you just do it, because that's how we're hardwired,
24:37
but, honestly, I don't really remember
24:39
what it was like to even have that hand.
24:41
[Weinberg] Even though an amputee doesn't have a thumb,
24:44
they still have the muscle.
24:45
You still have some kind of memory
24:48
of how you moved your fingers,
24:49
and you can think about moving your phantom fingers,
24:52
and the muscles would move accordingly,
24:54
and that's exactly what we use in order to, uh,
24:56
recreate the motion and put it in a prosthetic arm.
24:59
But does Jay still remember how to move fingers
25:03
that he hasn't had for, I believe, 30 years?
25:06
Now we'll run the model,
25:08
and you'll be able to control the hand.
25:10
[chuckles] You're optimistic. I'm crossing fingers.
25:13
Can I cross these fingers? [laughs]
25:15
Is that... is that an option yet?
25:17
Having Jay here for a day
25:19
and hoping to get him to a point
25:21
that he controls finger by finger,
25:23
I'm a little concerned that it will not work
25:25
in such a short period of time.
25:28
Okay. And...
25:29
-Ready? -Yeah. You should try each of the fingers.
25:32
All right, that's the thumb...
25:35
-Oh, shit! -Unbelievable.
25:39
All right, index...
25:41
Yay!
25:42
Wow, I'm surprised.
25:44
Middle...
25:46
[Barnes] Dude.
25:50
Five for five?
25:53
-[all cheering] -All five of them!
25:56
-Whoa. -That's wild.
25:57
All right, let me do it again.
25:59
You're a natural, man.
26:00
Doesn't that feel crazy?
26:02
-Yeah! -Feels wild.
26:04
-I didn't think it'd be as good. -I didn't either.
26:06
He hit me in the back after it worked, so...
26:09
That's the first time.
26:10
[Schneider] It's like a game-changer, even in its infancy,
26:13
which is kind of insane,
26:14
because it can only get better from there.
26:17
And it's really cool to play a small part in that.
26:19
[Weinberg] Now we have two main goals.
26:21
First, you need to move your muscle or your phantom finger,
26:24
and immediately see response, so this is one direction of research.
26:28
The other direction is to make it more accurate.
26:31
Being able to type on a keyboard,
26:33
use a computer mouse, uh, open a water bottle,
26:35
things like that that most people take for granted.
26:38
It's kind of like a... you know, sci-fi movie, soon to be written.
26:41
-[laughter] -Give us five, right?
26:44
That's awkward... oh, robot to robot hand.
26:46
Nice!
26:49
-That's... that was real, right? -Yeah.
26:51
If I find out you guys had a button under that desk...
26:53
No, nah, I promise. I promise.
26:56
[Downey] What began as one man's pursuit
26:58
to innovate music through A.I. and robotics
27:01
unexpectedly became something much greater.
27:05
A human body cooperating with a bionic hand
27:08
is one thing...
27:09
but is it possible to humanize a machine
27:12
to the point that it truly seems lifelike?
27:15
Or is that still sci-fi, and far, far away?
27:25
[Greg] How did things go with Will?
27:26
[Sagar] You know, one of the real challenges there
27:29
was just getting enough material
27:30
that we could actually come back with.
27:33
We can't possibly capture somebody's real personality,
27:36
you know, that's impossible,
27:38
but in order for it to really work,
27:40
it's really important to capture a feeling of Will.
27:44
Right, so...
27:45
[Downey] Will's avatar is actually Mark's first go
27:48
at creating a digital copy of a real person.
27:51
Wow, that's looking pretty good.
27:53
[Downey] He's not just trying to clone a human,
27:56
by any stretch,
27:57
but trying to create an artificial stand-in
27:59
that's somewhat believable.
28:01
Still, like most firsts, it's bumpy,
28:04
and it's a cautious road into the unknown.
28:07
[tech] A big challenge that I've found
28:09
while I've been looking through a lot of the images
28:11
is it seems that Will was moving a lot during the shots.
28:14
[Colin Hodges] Okay. When we're building digital Will,
28:16
we have about eight artists on our team
28:19
that come together
28:20
and pull all of the different components
28:22
to bring together this real-time character
28:24
that's driven by the artificial intelligence
28:26
to behave like Will behaves.
28:29
A big challenge we've got
28:31
is how we create Will's personality.
28:34
Yeah. Like, the liveliness
28:35
and the energy that he generates,
28:37
and the excitement.
28:39
The facial hair was a challenge.
28:41
Because it's so sparse, it's quite tricky to get
28:44
the hair separated from the skin.
28:46
[Sagar] We have to be able to synthesize
28:48
the sort of feel that you're interacting with Will.
28:51
So, Teah, I've got some stuff to hear.
28:54
We've got 16 variations.
28:56
-16 variations? -Yeah.
28:58
[Sagar] We take the voice data that we've got,
29:01
and then we can enable the digital version of Will
29:03
to say all kinds of different things.
29:05
[digital Will] Here's the forecast.
29:07
Yo, check out the forecast.
29:08
Yo, check out the weather and shit.
29:10
Here's the weather. Check out the weather.
29:11
Yah, 'bout to make it rain!
29:13
Kinda.
29:14
[Sagar] That's fantastic... the words,
29:16
the delivery, emphasis...
29:18
Shows you just how complex people's reactions are.
29:23
[will.i.am] It's awesome where we are in the world of tech.
29:27
Scary where we are, as well.
29:29
My mind started thinking, like, "Wait a second here.
29:32
Why am I doing this?
29:34
What's the endgame?"
29:37
Because, eventually, I won't be around,
29:41
but it will.
29:43
[Downey] Will's endgame is more modest than Mark's:
29:45
a beefed-up Instagram following, a virtual assistant,
29:48
anything that might help him expand his creative outlets
29:51
or free up time for more creative or philanthropic pursuits.
29:57
Okay, so, here we go.
30:00
That's looking really different.
30:02
It's gonna be really interesting,
30:04
because, you know, it's not every day
30:06
you get confronted with your virtual self.
30:08
Right.
30:09
Does he feel that this is like him?
30:12
If it's not representative of him
30:14
or if he doesn't think it's authentic,
30:15
then he won't want to support it.
30:22
-What up, Mark? -Oh, hey, how are you?
30:24
-You can see me, right? -Yes.
30:26
Yo, wassup? This is will.i.am.
30:29
[laughing]
30:30
[Sagar] This is the new version of you.
30:31
We can give him glasses there.
30:33
[will.i.am laughs] That's awesome.
30:35
I remember I had a pimple on my face that day. You captured it.
30:38
The good thing is, it's digital,
30:40
and we can remove it really easily.
30:42
How come you didn't remove that? [laughs]
30:44
[Sagar] You can make him do a variety of things.
30:47
Let's play "Simon Says."
30:49
Say, "I sound like a girl."
30:50
I sound like a girl.
30:52
Say that with a higher pitch.
30:54
[high voice] I sound like a girl.
30:56
Raise your eyebrows.
30:59
Poke out your tongue.
31:00
[Will laughs]
31:02
[will.i.am] Tell me about growing up in Los Angeles.
31:05
I was born and raised in Boyle Heights,
31:06
which is west of east Los Angeles,
31:08
which is east of Hollywood.
31:10
Just east of downtown.
31:12
[will.i.am] Should it sound exactly like me?
31:14
Nope.
31:16
Should it sound a little bit robotic?
31:17
Yes. It should.
31:20
For my mom.
31:22
My mom should not be confused.
31:24
What's your name?
31:26
[in Spanish] Mi nombre es Will.
31:27
[in English] You speak Spanish?
31:29
I don't know.
31:30
[laughing]
31:31
I know it needs some fine-tuning,
31:33
but the way it's looking so far
31:35
is mind-blowing.
31:36
Thanks, Mark.
31:38
Yeah, no worries.
31:39
[Sagar] How far do you go down that path
31:41
until you can label it a living...
31:44
a digital living character?
31:47
This raises some of the deepest questions
31:50
in science and philosophy, actually,
31:53
you know, the nature of free will.
31:55
How do you actually
31:56
build a character which is truly autonomous?
31:58
Peek-a-boo!
32:01
[Baby X giggles]
32:02
What is free will? What does it take to do that?
32:05
[Weinberg] Artificial Intelligence
32:07
is crucial to the work we are doing,
32:09
to inspire, to surprise,
32:10
to push human creativity and abilities
32:13
to uncharted domains.
32:15
[all cheering]
32:16
Unbelievable.
32:18
[playing drums]
32:22
[Downey] Free will...
32:25
...it's something we've been grappling with
32:27
for thousands of years, from Aristotle to Descartes,
32:30
and will continue to grapple with for a thousand more.
32:33
Will we ever be able to make an A.I.
32:35
that can think on its own?
32:37
A second, artificial version of me
32:39
that is truly autonomous?
32:42
A Robert that can actually think and feel on his own,
32:45
while this Robert here takes a nap?
32:48
[engines roaring]
32:49
Impossible?
32:51
Well, when you consider
32:52
what human cooperation has already accomplished...
32:55
a man on the moon...
32:57
decoding the human genome...
32:59
discovering faraway galaxies...
33:02
I'd put my money on dreamers like Mark and Gil
33:05
over the "Earth is flat" folks any day.
33:09
Until then... nap time.
33:16
[man 1] Look at our world today.
33:18
Look at everything we've created.
33:21
Artificial Intelligence is gonna be
33:23
the technology that takes that to the next level.
33:26
[man 2] Artificial Intelligence can help us
33:28
to feed the world's population.
33:30
[man 3] The fact that we can find where famine might happen,
33:33
it's mind-blowing.
33:35
These are conflict areas,
33:37
this is an area that we need to look at protecting.
33:39
Then launch A.I.
33:41
[man 4] We are going to release the speed limit on your car.
33:46
Tim, can you hear me?
33:47
[man 5] With A.I.,
33:49
ideas are easy, execution is hard.
33:52
[Domingos] What excites me the most about where we might be going
33:55
is having more super-powers...
33:57
[firefighter] I got him!
33:58
[Domingos] ...and A.I. is super-powers for our mind.
34:00
[man 6] Even though the limb is synthetic materials,
34:03
it moves as if it's flesh and bone.
34:05
[woman 1] You start to think about a world
34:07
where you can prevent disease before it happens.
34:09
[man 7] A.I. can give us that answer
34:11
that we've been seeking all along...
34:13
"Are we alone?"
34:14
Bah!
34:15
[man 8] I love the idea that there are passionate people
34:17
dedicating their time and energy
34:19
to making these things happen.