Wednesday, April 29, 2015

Google I/O 2014 - Biologically inspired models of intelligence



For decades Ray Kurzweil has explored how artificial intelligence can enrich and expand human capabilities. In his latest book, How To Create A Mind, he takes this exploration to the next step: reverse-engineering the brain to understand precisely how it works, then applying that knowledge to create intelligent machines. In the near term, Ray's project at Google is developing artificial intelligence based on biologically inspired models of the neocortex to enhance functions such as search, answering questions, interacting with the user, and language translation. The goal is to understand natural language to communicate with the user as well as to understand the meaning of web documents and books. In the long term, Ray believes it is only by extending our minds with our intelligent technology that we can overcome humanity's grand challenges.

Watch all Google I/O 2014 videos at: g.co/io14videos


0:00
0:06RJ MICHAEL: Hello everyone I'm RJ Michael.
0:09I'm Director of Games for Google.
0:14And it is my great pleasure to introduce to you today Ray
0:17Kurzweil.
0:19Ray Kurzweil is one of the world's leading inventors,
0:23thinkers, and futurists, with a 30-year track
0:28record of accurate predictions.
0:31He's called many things by a number of sources.
0:34He's called "the restless genius" by the "Wall Street
0:37Journal," "the ultimate thinking machine" by "Forbes Magazine."
0:42PBS selected him as one of their 16 revolutionaries
0:45who made America.
0:48Kurzweil was selected as one of the top entrepreneurs
0:50by "Inc. Magazine," who described him
0:53as "the rightful heir to Thomas Edison."
0:59Kurzweil was a principal inventor
1:01of the first CCD flatbed scanner, the first omnifont
1:06optical character recognition system.
1:09He did the first omnifont OCR.
1:13He did the first print-to-speech reading machine
1:15for the blind, the first text-to-speech synthesizer,
1:18the first music synthesizer capable of recreating
1:21the grand piano and other orchestral instruments,
1:25and the first commercially marketed large vocabulary
1:29speech recognition system.
1:31This is the stuff that the guy's done with his life,
1:33with his career so far.
1:34Among his many honors, he's the recipient
1:36of the National Medal of Technology.
1:39He was inducted into the National Inventors
1:41Hall of Fame.
1:42He holds 20 honorary doctorates, and he has received honors
1:46from three US Presidents.
1:50Ray has written five national bestselling books, including
1:53the New York Times best seller, "The Singularity is Near,"
1:58in 2005, and more recently "How to Create a Mind" in 2012.
2:04He is Director of Engineering at Google.
2:06He is now heading up a team developing machine intelligence
2:11and natural language understanding.
2:14He has also been a personal hero of mine since I was young.
2:19And he has this drive to make AI happen at Google.
2:25And I couldn't be more delighted to announce to you guys
2:28that, as of today, I am now going
2:30to start exploring the entertainment and education
2:34space directly with Ray Kurzweil in his new organization.
2:38And who knows where this is going to go,
2:40but it's going to be awesome.
2:42Ladies and gentlemen, may I introduce Ray Kurzweil.
2:45[APPLAUSE]
2:49
2:59RAY KURZWEIL: Thank you, and thanks, RJ.
3:01And now that you're going to be working with me,
3:04we have to work on your enthusiasm a little bit.
3:08And it's great to be at Google.
3:10It's actually my first job.
3:11I'll talk a little bit more about what I'm doing,
3:14but I'd like to share with you some ideas about thinking.
3:19I've been thinking about thinking for 50 years.
3:21First, I would like to say a few words
3:23about the law of accelerating returns.
3:25I won't say too much about it, because many of you
3:28have heard me talk about it before,
3:30but the law of accelerating returns is alive and well.
3:34It's not just Moore's Law.
3:36I keep hearing people say, well, Moore's Law
3:38is coming to an end, as if that were
3:40synonymous with the exponential growth of information
3:43technology.
3:44Moore's law was just one paradigm among many.
3:47When I first studied this in 1981,
3:49Moore's Law-- which is the shrinking of components
3:52on an integrated circuit-- had only been underway for a few years.
3:55In the 1950s, they were shrinking vacuum tubes
3:57to keep this exponential growth going.
3:59
4:02Gordon Moore originally said Moore's Law
4:04would come to an end in 2002.
4:06Justin Rattner, the CTO of Intel, now says 2022.
4:09But he'll also show you, in their labs,
4:12the sixth paradigm-- three-dimensional,
4:14self-organizing molecular circuits
4:16working, which will keep the law of accelerating returns going
4:20well into the latter part of this century.
4:23So what is the law of accelerating returns?
4:25It's not that everything grows exponentially, or even
4:27every aspect of information technology.
4:30We're speeding up transistors, making
4:32them smaller so the electrons have less distance to travel,
4:35so transistors ran faster.
4:37That was a sub-paradigm.
4:39I always felt we were going too fast.
4:41The human brain is actually very slow,
4:43computes about 100 calculations per second
4:47in the interneuronal connections.
4:49But it's massively parallel.
4:51So there's actually a 100 trillion-fold parallelism.
4:54There's 100 billion neurons with about 10,000 connections each,
4:59and that's where the computations take place.
5:01So it's a very massively parallel, but slow.
5:03So it uses very little energy, and we
5:06needed to move more in that direction,
5:08towards more parallelism and less speed.
5:10So speeding up transistors was just one sub-paradigm.
5:15And we already have taken steps in the third dimension.
5:19Many memory circuits are already three-dimensional.
5:22The law of accelerating returns is not
5:25limited at all to Moore's Law.
5:27Moore's Law is one paradigm among many in computation.
5:30Computation is one example of many
5:33of the law of accelerating returns.
5:35So what exactly does it pertain to?
5:37It pertains to the price performance and capacity
5:41of information technology, and every form
5:45of information technology.
5:46So in computation, calculations per second per constant dollar
5:50has been speeding up at an exponential pace since the 1890
5:53census.
5:54But it's also true of communications,
5:56biological technologies, brain scanning, modeling the brain,
6:02being able to reprogram biological data,
6:05printing-- the spatial resolution
6:08of three-dimensional printing, which is turning information
6:11into physical products-- these are
6:12all examples of either price performance or capacity
6:16of information technology.
6:18And I'll show you this first example.
6:21
6:25I mean, this is the graph I had in 1981 of computation.
6:28It's a logarithmic scale.
6:29Every level is 100,000 times greater
6:31than the level below it.
6:33And so this represents a trillions-fold increase
6:36in calculations per second per constant dollar.
6:39Moore's Law is the fifth paradigm.
6:42But notice how smooth a trajectory that is.
6:46It really has a mind of its own.
6:47It goes through thick and thin, war and peace.
6:49And exponential growth-- the second point I want to make--
6:52is not intuitive.
6:54If you ever wonder, gee, why do I have a brain?
6:56It's really to make predictions about the future
6:58so we can anticipate the consequences of our action,
7:01and the consequences of inaction.
7:03But those built-in predictors are linear.
7:06When we were walking through the fields 1,000 years ago,
7:08we would make a prediction-- OK, that animal is going that way.
7:11I don't want to meet him.
7:12I'm going to go a different route.
7:13That was good for survival.
7:14That became hardwired in our brains,
7:16but those predictors are linear.
7:18And people still use their linear intuition
7:22about the future.
7:23That's the principal difference between myself and my critics.
7:26We're both looking at the same reality.
7:28We have similar judgments about it.
7:30And if I thought progress was linear,
7:32I'd be pessimistic also.
7:35And many things are linear.
7:37Biology-- health and medicine was linear.
7:39That was still useful.
7:41Life expectancy was 19, 1,000 years ago.
7:43We've quadrupled it.
7:45I talked recently to some junior high school students
7:47and pointed out they would all be senior citizens
7:49if it hadn't been for that progress.
7:51Life expectancy was 37, 200 years ago.
7:55Schubert and Mozart died in their 30s
7:56of bacterial infections, and that was typical.
7:59That's all been from linear progress.
8:01Exponential progress is quite different.
8:04A linear progression-- that's our intuition about the future--
8:07goes one, two, three.
8:09An exponential progression-- that's
8:10the reality, not of everything, but of the price performance
8:14and capacity of information technology--
8:16goes one, two, four.
8:18It doesn't sound that different, except by the time
8:20you get to 30, the linear progression-- our intuition's
8:23at 30.
8:24The exponential progression is at a billion.
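To make that concrete, here is a minimal arithmetic sketch in Python (my illustration, not part of the talk), taking "30 steps" to mean 30 increments versus 30 doublings starting from 1:

    # Linear intuition: add one per step. Exponential reality: double per step.
    steps = 30
    linear = 1 + steps            # 31 -- "our intuition is at 30"
    exponential = 2 ** steps      # 1073741824 -- "the exponential progression is at a billion"
    print(linear, exponential)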
8:29And that's not an idle speculation.
8:31I mean, this is several billion times
8:32more powerful per dollar than the computer
8:35I used when I was an undergraduate at MIT.
8:37It's several trillion-fold since we
8:40started with the 1890 American census.
8:43But again, look at how smooth a regression that is.
8:45Nothing has any impact on it.
8:48The third point I want to make is
8:49that it's not just computation.
8:51It really affects every form of information technology.
8:54And information technology is ultimately
8:56going to transform everything we care about,
8:59as a result of application developers like yourself.
9:03I mean that's what drives it forward,
9:04and I'll say more about that.
9:07We could buy one transistor for $1 in 1968.
9:10I was pretty excited about that because I
9:12was used to spending $50 for a telephone relay
9:14that could do that.
9:14We can now buy 10 billion.
9:17Cost of a transistor cycle has come down by half every year.
9:20That's a 50% deflation rate, so I
9:24can get the same computation, or communication,
9:27or biological technologies as I could
9:29a year ago for half the price.
9:31Economists actually worry about that because we
9:33had massive deflation during the Great
9:35Depression-- a different reason.
9:37There was a collapse of consumer confidence.
9:39But the concern is, if I can get the same stuff--
9:41and I'll talk about three-dimensional printing
9:43in a moment to include physical stuff-- for half the price,
9:48I'll buy more.
9:49I mean, that's economics 101, but I'm actually
9:51going to double my consumption.
9:53And if I don't, let's say I increase my consumption
9:56in terms of bits and bytes by 50%, which
9:58is a lot-- the size of the economy, not as measured
10:02in bits, bytes, and base pairs, but as measured in currency--
10:05will shrink for a variety of good reasons.
10:08That would be a bad thing.
10:10But that's not what happens.
10:12I've got 50 different consumption charts like this.
10:14This is bits of memory consumed.
10:17We actually more than double it every year.
10:19There's been 18% growth in constant currency each year
10:23for the last 50 years in every form of information technology,
10:26despite the fact that you can get twice as much of it
10:29each year for the same price.
10:31And the reason for that is application developers
10:35like all of you.
10:37I mean, why weren't there social networks eight years ago?
10:40Was it because Mark Zuckerberg was still
10:42in junior high school?
10:43No, there were attempts to do it,
10:45and there were arguments-- can we afford
10:47to allow users to download a picture?
10:49The price performance wasn't there.
10:51Why weren't there search engines more than 15 years ago?
10:55I wrote in the early '80s, when the ARPANET connected
10:58a few thousand scientists, that this was growing exponentially
11:01and would necessarily continue to do so.
11:02There would therefore be a World Wide Web--
11:04I didn't use that term-- connecting hundreds of millions
11:07of people to each other, and to vast knowledge resources
11:10by the late '90s.
11:11People thought that was nuts when
11:13the entire American defense budget could tie together
11:15a few thousand scientists, but that's
11:18the power of exponential growth.
11:19That's what happened.
11:21And I wrote that there would be so much information
11:23you couldn't find anything without search engines,
11:25and the computational communication resources
11:28needed for a search engine would come into place.
11:31What you could not predict is that it
11:33would be these couple of kids in a Stanford dorm near here,
11:36who would take over the world of search among the 50 projects
11:39that were seeking to do that.
11:40I'm not saying everything is predictable,
11:42but you could predict that search engines would
11:45be needed and feasible in the late '90s.
11:47And Google was founded in '98.
11:50So "Time Magazine" wanted a particular computer
11:54they'd covered as the last point on this cover story
11:58about the law of accelerating returns.
11:59It's right there.
12:00And this was a curve I'd laid out 30 years earlier.
12:03So this is, in fact, very predictable.
12:06And in terms of all my predictions,
12:08when they're like this, in terms of actual numbers-- price
12:12performance or capacity of different information
12:14technologies-- they're really right on the money.
12:16If you Google how my predictions are faring,
12:19you'll get a 150-page essay looking at all the predictions
12:25that I've made as of that time, which was a couple of years
12:27ago, including 147 that I made in "The Age
12:30of Spiritual Machines," which I wrote in the late '90s,
12:33came out in '99, about 2009-- 78% were exactly
12:38correct to the year.
12:40And these were predictions that were by decade.
12:45Another 8% were off one year, so I
12:47called them essentially correct.
12:48So 86% were correct or essentially correct.
12:52The ones that were wrong included things
12:56like regulatory and cultural issues.
13:00Like, for example, one that was wrong
13:01is that we'd have self-driving cars, which, in fact, began
13:06a glimmer of working in 2009.
13:08If I had said 2014, it would have been correct,
13:12because Google self-driving cars have already gone 8,000 miles.
13:16And Google's going to launch a fleet of experimental vehicles
13:20in Mountain View this year.
13:21But it wasn't correct for 2009.
13:26The number of bits we move around wirelessly
13:28in the world over the last century--
13:31there was Morse code, over AM radio a century ago,
13:35today's 4G networks-- trillions-fold increase.
13:38But again, look at how smooth a trajectory it is.
13:42Internet data traffic doubling every year.
13:44Here's that graph I had of the ARPANET in the early '80s,
13:48and projected that out.
13:50The graph on the right is the same data,
13:52but on a linear scale.
13:54And that's how we experience it.
13:56We don't experience it in the logarithmic domain.
13:58So to the casual observer, it looked
14:00like, whoa, World Wide Web, new thing, came out of nowhere.
14:03But you could see it coming.
14:05And biology, this has been a perfect exponential.
14:10The Genome Project was announced in 1990,
14:12was not a mainstream project.
14:14Halfway through the project, critics
14:16blasted it, saying, look, here it's halfway
14:19through this 15-year project.
14:20You've only collected 1% of the genome.
14:22So seven years, 1%, it's going to take 700 years,
14:25just like we said.
14:27My reaction is, we're at 1%.
14:29We're almost done, because it's an exponential progression.
14:34[LAUGHTER]
14:35And indeed, it was done seven years later,
14:37because 1% is seven doublings from 100%.
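The arithmetic behind "seven doublings," as a small sketch (Python, my illustration, not from the talk):

    # Start at 1% of the genome sequenced; the amount doubles every year.
    fraction = 0.01
    years = 0
    while fraction < 1.0:
        fraction *= 2
        years += 1
    print(years, fraction)   # 7 years, 1.28 -- past 100%, matching "seven doublings"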
14:40That's continued since the end of the Genome Project.
14:42That first genome cost a billion dollars.
14:43We're now down to a few thousand dollars.
14:45But it's not just sequencing.
14:47We can now reprogram this outdated data.
14:50And there are many examples of that.
14:53At the Joslin Diabetes Center, they've
14:54turned off the fat insulin receptor gene.
14:56We have technologies like RNA interference
14:58that can turn genes off.
14:59These animals ate ravenously and remained slim,
15:02and lived 20% longer.
15:04I've worked with a company that adds a gene to patients
15:07with a terminal disease called pulmonary hypertension, caused
15:10by one missing gene.
15:11They scrape out these cells non-invasively from the throat,
15:14add the gene in vitro, inspect that it was done correctly,
15:17replicate it several million fold-- another new technology.
15:21So they now have millions of cells with that patient's DNA,
15:23but with the gene they're missing.
15:25Inject it back in the body.
15:27The body recognizes them as lung cells,
15:29and this has cured this terminal disease in human patients.
15:34There's an interesting case combining
15:36a number of different exponentially growing
15:38information technologies, with this young girl who
15:41had a damaged windpipe.
15:42She was not going to be able to survive with it.
15:45So they scanned her throat with noninvasive imaging-- spatial
15:48resolution of noninvasive imaging
15:50is doubling every year-- that's an important technology
15:53for reverse engineering the brain-- designed
15:56her new windpipe in the computer using computer-aided design,
16:00printed it out with a 3D printer using biodegradable materials,
16:04then populated this scaffolding with her stem cells,
16:08using the same 3D printer, grew out a new windpipe for her
16:12and installed it surgically.
16:13And it worked fine.
16:16You can now fix a broken heart-- not yet from romance.
16:20That'll take us a few more decades.
16:22[LAUGHTER]
16:23But half of all heart attack survivors have a damaged heart.
16:26It's called low ejection fraction.
16:28My father had that in the '60s, and could hardly walk.
16:31Now you can fix that by reprogramming adult stem cells,
16:34and rejuvenating the heart.
16:36You have to be a medical tourist, because it's not
16:38yet approved here, although it will be soon.
16:41There are many other examples.
16:42We could talk all day about this,
16:45but our ability to reprogram this outdated software
16:48is growing exponentially.
16:50These technologies are now 1,000 times more powerful
16:52than they were a decade ago, when
16:54the Genome Project was completed.
16:56They'll be another 1,000 times more powerful in a decade,
16:58a million times more powerful in 20 years.
17:01That's the implication of a doubling in power every year.
17:04Somewhere between that 10 and 20 year mark,
17:06we'll see significant differences in life
17:09expectancy-- not just infant life expectancy,
17:11but your remaining life expectancy.
17:14The models that are used by life insurance companies
17:17sort of continue the linear progress
17:19we've made before health and medicine
17:22became an information technology--
17:23progress based, basically, on accidental findings.
17:27This is going to go into high gear.
17:31Life expectancy is a statistical phenomenon.
17:34You could still be hit by the proverbial bus tomorrow.
17:37Of course, we're working on that here at Google,
17:39also, with self-driving cars.
17:41
17:43And three-dimensional printing-- I
17:45think we're in the hype phase of this.
17:49I've written about the life cycle of technologies.
17:52Usually there are early enthusiasts who see the vision,
17:55but haven't really calculated the timing correctly--
17:59because exponential growth ultimately
18:01becomes transformative, but it actually
18:02starts out very slowly.
18:04You're doubling tiny, little numbers,
18:05like 1/10,000 of the genome in 1990, 2/10,000 in 1991.
18:11So the progress doesn't come on schedule.
18:13Disillusionment sets in, and then you have basically a bust.
18:19And then it comes back, as we really,
18:21truly understand the true markers
18:23of what it will take to be successful.
18:27This looks like a young audience,
18:28but you may remember the internet boom in the 1990s.
18:32If you had the URL dog.com, you were a billionaire.
18:36Then around the year 2000, people
18:38realized you can't really make money
18:40with these internet companies.
18:41And there was the internet bust, which
18:43almost took down the economy.
18:45But then it came back, and today you
18:46have internet companies like Google, and Apple, and Facebook
18:51that are worth hundreds of billions of dollars.
18:55And we're kind of in that hype phase now.
18:58I think we're still five or six years away from the parameters
19:03we need to make this successful.
19:04We need sub-micron resolutions.
19:07The resolution accuracy is improving
19:09by a factor of about 100 in 3D volume per decade,
19:12so it's exponential progress.
19:14But right now, it's multi-micron.
19:16We can do interesting things.
19:18We had an opening a year ago at Singularity University
19:22where the band-- all the instruments that the band was
19:25playing were printed out on three-dimensional printers.
19:27So there are interesting niche applications.
19:30But by 2020, we'll be able to print out clothing,
19:32for example.
19:33So people will go-- great, there goes the fashion industry.
19:36But not so fast.
19:38I mean, look at other industries that have already
19:42undergone the transformation from physical products
19:44to digital products.
19:46A few years ago, if I wanted to send you a book, or a movie,
19:48or a music album, I'd send you a FedEx package.
19:51Today, I can send you an email attachment.
19:54And there is indeed an open source market
19:58with millions of free songs, videos, movies, books,
20:02documents that you can download legally for free.
20:05And you can have a very good time with these free media
20:08products.
20:09But people still spend money to read Harry Potter,
20:11or go to the latest blockbuster, or get music
20:14from their favorite artists.
20:15And you have the coexistence of the open source market, which
20:18is a great leveler, bringing high-quality products
20:21to everyone at almost no cost, or no cost,
20:24and a proprietary market.
20:26That'll be the nature of the economy going forward.
20:28So in the 2020s, you'll be able to download cool fashion
20:32designs, print them out at pennies per pound,
20:34which is what it costs for three-dimensional printing.
20:37But people will still spend money for the latest
20:41hot designs from their favorite designer.
20:42
20:45And I mentioned I've been thinking
20:47about thinking for a long time.
20:4850 years ago, when I was 14, I wrote a paper
20:51on how I thought the brain worked.
20:54There was actually very little to go on.
20:56But I described it as a series of modules,
20:58and each module could recognize a pattern.
21:00And the essence of the human brain was pattern recognition.
21:03We actually weren't very good at logical thinking.
21:06Even then, I could see that chess computers were based
21:09on logic, and being able to look ahead, and calculate
21:12all the kind of move sequences.
21:15The human brain did it by deep forms of pattern recognition.
21:19In '97, Kasparov was asked, Deep Blue analyzes 200 million
21:24board positions a second.
21:25How many do you analyze?
21:27And he said, maybe less than one.
21:29So how is it that he was able to hold up at all?
21:31It's his deep powers of pattern recognition.
21:34And I described it as a series of modules, each of which
21:37could learn a pattern, remember a pattern, implement a pattern,
21:40discover a pattern.
21:42And that these were organized in hierarchies,
21:44and we created that hierarchy with our own thinking.
21:47And that the whole neocortex worked the same way.
21:50And that was actually contrary to the common wisdom
21:53of the time, because it was noticed
21:55that these are different regions, and they
21:57do very different things.
21:58So it was thought they must be organized very differently.
22:01V1 in the back of the head recognizes visual images,
22:04because that's where the optic nerve spills into,
22:07and it can recognize the fact that the edge of this table
22:09is flat, that there's a horizontal crossbar
22:13in that capital A, and so on.
22:16The fusiform gyrus up here recognizes faces.
22:19We know that because if you conk somebody
22:21over the head in that region, and knock it out,
22:23people can't recognize faces.
22:25The frontal cortex is famous for language, and art, and science.
22:29They do very different things.
22:31They must be using different algorithms.
22:34There was one neuroscientist who actually did
22:36autopsies of the neocortex in all of these different regions,
22:40and found they looked exactly the same--
22:42the same neurons, the same interconnection patterns.
22:44He said neocortex is neocortex-- Vernon Mountcastle.
22:49And so I use that.
22:51And I also use observations of human brains
22:55in action, which is still our best laboratory,
22:57and described this basic method.
23:00This actually describes the same algorithm,
23:05and actually describes how each of these modules,
23:08which I count now at 300 million,
23:10can recognize a pattern, learn a pattern.
23:12It's basically functionally similar
23:16to a hierarchical hidden Markov model, which
23:19is a technology I worked on in the 1990s, in speech recognition,
23:23and early forms of natural language understanding.
23:26And it's a little bit different than neural nets,
23:29because neural nets-- the basic element is a neuron, which
23:32can kind of learn one state, not really a whole pattern.
23:37And in my view, the basic module is a pattern-recognition module
23:42that can learn a more complex pattern.
23:44And a hierarchical hidden Markov model
23:46is a hierarchy of Markov models, each of which
23:49can learn a fairly complicated pattern.
23:52I believe that is how the human brain works.
23:55And that's what I'm doing here at Google.
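As a rough, hypothetical sketch of that idea -- a hierarchy of pattern-recognition modules, each firing when enough of its child patterns fire -- here is a toy Python illustration (mine, not Kurzweil's or Google's actual model; a real hierarchical hidden Markov model would also attach transition and emission probabilities to each level):

    # Toy hierarchy: level-1 modules recognize characters, a level-2 module
    # recognizes a word from its characters, tolerating a missing one.
    class PatternModule:
        def __init__(self, name, children, threshold=1.0):
            self.name = name
            self.children = children      # child modules or raw input symbols
            self.threshold = threshold    # fraction of children that must fire

        def fires(self, observed):
            hits = 0
            for child in self.children:
                if isinstance(child, PatternModule):
                    hits += child.fires(observed)
                else:
                    hits += child in observed
            return hits / len(self.children) >= self.threshold

    # Level 1: character recognizers (stand-ins for low-level feature detectors).
    a_mod = PatternModule("A", ["A"])
    p_mod = PatternModule("P", ["P"])
    l_mod = PatternModule("L", ["L"])
    e_mod = PatternModule("E", ["E"])

    # Level 2: a word recognizer that fires if at least 80% of its children fire.
    apple = PatternModule("APPLE", [a_mod, p_mod, p_mod, l_mod, e_mod], threshold=0.8)

    print(apple.fires({"A", "P", "L", "E"}))   # True  -- enough child patterns fire
    print(apple.fires({"A", "P"}))             # False -- too few child patterns fire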
23:58I gave an early version of this book
24:00:00to Larry Page a couple of years ago.
24:03:00He liked it.
24:04:00I met with him to ask him for an investment in the company I was
24:07:00going to start to develop these ideas.
24:10:00And he said, have you thought of doing this here at Google?
24:14:00We have these terrific resources in terms
24:18:00of data, and computation, and talent.
24:20:00He actually said it in a very low key way,
24:22:00but that was his message.
24:25:00So I met with him-- Alan Eustace,
24:30:00who's here in the audience.
24:32:00They said I'd have the kind of independence
24:34:00I'd have with my own company, but these Google resources--
24:37:00and so I've been doing that now for a year and a half.
24:42:00And it's been terrific.
24:43:00It's really the only place I could do this project, which
24:46:00has been a 50-year endeavor.
24:49:00And the spatial resolution of brain scanning, as I mentioned,
24:52:00is doubling every year.
24:53:00We can now see inside a living brain,
24:54:00see it create your thoughts, see your thoughts
24:56:00create your brain.
24:58:00And there's a few significant milestones
25:01:00in the history of the biological version of this thinking.
25:04:00You see it up there-- the neocortex.
25:06:00The neocortex emerged 200 million years ago with mammals.
25:10:00It was capable of a different type of thinking,
25:13:00basically hierarchical thinking.
25:15:00Other animals could learn things,
25:17:00but not with elaborate hierarchies.
25:19:00They could have a behavior that might
25:21:00have a hierarchical aspect to it, but it was fixed.
25:24:00They couldn't learn a new hierarchy--
25:26:00at least not in one lifetime.
25:27:00Maybe over 10,000 lifetimes, they
25:29:00could evolve a new behavior.
25:32:00Next significant thing that happened-- 65 million years
25:34:00ago, there was a violent change in the environment.
25:37:00We call it the Cretaceous Extinction Event.
25:39:00That's when the dinosaurs went extinct.
25:40:00That's when 75% of all the animal and plant species
25:44:00went extinct.
25:46:00And that's when mammals overtook their ecological niche,
25:49:00because they could adapt their behavior quickly enough
25:51:00to cope with it.
25:53:00Next significant milestone was 2 million years ago.
25:56:00We developed these large foreheads,
25:58:00so we now had more neocortex.
26:00:00That neocortex has all these folds and convolutions
26:03:00basically to increase its surface area.
26:05:00It's a flat structure.
26:06:00It's about the size of a table napkin and just as thin.
26:09:00It's one module thick, but it has so many convolutions
26:12:00and ridges, it's 80% of your brain.
26:15:00And the frontal cortex-- it's often been thought
26:18:00it must be qualitatively different,
26:20:00because it does these amazing things, like art and poetry.
26:25:00But recent research projects examined
26:31:00the issue of what happens to V1, which I mentioned
26:34:00is in the back of the head, and does these very simple things,
26:37:00like the fact that that's a straight line-- what
26:40:00happens to it in a congenitally blind person?
26:43:00They're not getting any visual images.
26:46:00Well, the frontal cortex notices hey, V1 isn't doing anything,
26:49:00and it actually harnesses it to help it with language,
26:51:00and art, and science-- at the opposite extreme end
26:55:00of the continuum of complexity of features,
26:58:00showing that they're both basically using
27:00:00the same algorithm.
27:03:00And so we were already doing a very good job as primates
27:07:00without the frontal cortex.
27:08:00Now we had this additional quantity,
27:11:00and so we could think higher thoughts.
27:13:00Because the neocortex is built on conceptual levels.
27:16:00Each level is more abstract than the one below it.
27:19:00So the first thing we invented was
27:21:00language, and that was about a couple hundred thousand years
27:24:00ago.
27:25:00So I have an idea in my head.
27:27:00It's a hierarchy of other ideas, and symbols,
27:31:00and other structures.
27:34:00And I want to actually communicate that and basically
27:37:00transmit that hierarchical structure to your neocortex.
27:41:00So we invented language in order to do that.
27:44:00And it's a communication medium that is hierarchical,
27:48:00so it could reflect the hierarchical structures
27:50:00in the neocortex.
27:51:00The neocortex was successful because the world
27:54:00is hierarchical.
27:55:00That's the best way to understand it.
27:56:00Trees have limbs.
27:58:00Limbs have branches, branches have other branches.
28:00:00Some branches have leaves.
28:01:00The world is organized in a hierarchical fashion.
28:04:00And we could now represent this in language.
28:07:00And if you ever want to see some entertaining examples
28:11:00of the hierarchy in language, read a Gabriel Garcia Marquez
28:19:00novel.
28:20:00He has one sentence that's six pages long,
28:22:00and it's grammatically correct.
28:24:00And it has a fantastic array of hierarchical structures showing
28:28:00the indefinite hierarchy we can create with language,
28:31:00reflecting the indefinite hierarchy we
28:33:00can have in our ideas.
28:36:00And there's then been a continual acceleration
28:40:00of technology.
28:40:00Written language only took a few thousand years.
28:42:00The first examples were thousands of years ago.
28:45:00The printing press took 400 years to reach a mass audience.
28:48:00The telephone reached a quarter of the American and European
28:51:00population in 50 years.
28:52:00The cellphone took seven years.
28:54:00Social networks, wikis, and blogs took three years.
28:57:00We continually are accelerating, basically,
29:00:00these information technologies because
29:02:00of the law of accelerating returns.
29:05:00And we can now simulate aspects of the neocortex.
29:09:00And fundamentally what my team-- and we're not
29:12:00the only team doing this-- is trying to do
29:14:00is create a functional simulation of the neocortex.
29:17:00And I'll tell you the key problem.
29:19:00We can already create hierarchies.
29:21:00So even in the 1990s, we had a hierarchy of acoustic states,
29:25:00and then phonemes, and then word models
29:28:00for speech recognition, and then simple grammatical models,
29:31:00so that we could have a sentence like, Move this paragraph to
29:35:00after the third paragraph in the next page.
29:36:00And it would carry out that simple command.
29:40:00But we couldn't actually add a new layer ourselves.
29:43:00That's actually, if you want to speak technically,
29:46:00the key technical challenge in trying to create more and more
29:50:00flexible AI, is how do we create the next conceptual level
29:55:00that's more abstract than the ones we have, automatically
29:58:00from the data, rather than reprogramming it ourselves
30:02:00using our human intelligence?
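To make the challenge concrete, here is a deliberately naive Python sketch (my illustration only -- not a solution, and not what Kurzweil's team does): count which top-level patterns co-occur most often and promote the most frequent pair into a candidate for a new, more abstract unit. The hard, unsolved part he describes is doing this robustly, level after level, directly from data.

    # Naive "chunking": promote the most frequent adjacent pair of top-level
    # patterns into a candidate higher-level pattern.
    from collections import Counter

    sequences = [
        ["the", "big", "dog", "barked"],
        ["the", "big", "dog", "slept"],
        ["a", "big", "dog", "ran"],
    ]

    pair_counts = Counter()
    for seq in sequences:
        for left, right in zip(seq, seq[1:]):
            pair_counts[(left, right)] += 1

    new_unit, count = pair_counts.most_common(1)[0]
    print(new_unit, count)   # ('big', 'dog') 3 -- a candidate unit one level up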
30:03:00
30:11:00I'll skip some of this to-- at the very high level,
30:16:00you have very abstract ideas, like that's funny,
30:20:00that's ironic, she's pretty.
30:22:00This 16-year-old girl is having brain surgery.
30:25:00And whenever they stimulated these points showed in red,
30:27:00she would laugh.
30:29:00They wanted to be able to talk to her.
30:30:00There's no pain receptors in the brain,
30:32:00so you can do that during brain surgery.
30:34:00And they thought they were stimulating some kind of laugh
30:36:00reflex, but they quickly realized that no, they
30:38:00were triggering the genuine perception of humor.
30:42:00She just found everything hilarious
30:44:00whenever they stimulated these points.
30:45:00You guys are so funny just standing around,
30:47:00was a typical comment.
30:50:00And they weren't funny, so--
30:52:00[LAUGHTER]
30:54:00So an example from another company
30:56:00that shows our beginning ability to actually understand
31:00:00human language is WATSON.
31:02:00As you can see, WATSON got a higher score
31:04:00than the best two human players in Jeopardy combined.
31:07:00It got this query correct in the rhyme category,
31:09:00"A long, tiresome speech delivered
31:12:00by a frothy pie topping," and WATSON quickly
31:14:00said, "What is a meringue harangue?"
31:17:00And WATSON got its knowledge by reading 200 million documents
31:20:00of natural language, including all
31:23:00of Wikipedia and other encyclopedias.
31:25:00It doesn't understand each page as well as you or I,
31:27:00but it makes up for that by reading a lot of pages.
31:31:00And that's the kind of thing we're trying to do here.
31:34:00We have a model that I believe actually
31:37:00will solve this key problem of being
31:41:00able to add to the hierarchy automatically,
31:44:00so that we can handle, ultimately, complex documents.
31:47:00So one application, for example, would
31:49:00be in language translation.
31:50:00Right now, it does a very good job through the power of data,
31:52:00and these Rosetta Stone databases of translated text.
31:56:00By matching word sequences, we're
31:58:00improving the way that we match them.
32:00:00But we really would like to do it
32:01:00the way a human does it, which is to understand it.
32:04:00What does it mean to understand?
32:05:00It means to take the language, and actually
32:06:00create this hierarchical structure
32:08:00of the ideas in my head, and then resynthesize it,
32:11:00re-articulate it in the new language.
32:13:00That's the kind of thing we hope to do.
32:14:00We'd like the search engine to read for meaning.
32:18:00So if you put out a tweet, "Everything Ray Kurzweil
32:21:00is saying at I/O is nonsense," there's
32:25:00actually, in that simple text, a hierarchy,
32:28:00which you need to understand to really understand
32:30:00what that's trying to say.
32:32:00If you write a blog post, you have something to say,
32:34:00and the search already goes substantially
32:36:00beyond the base forms.
32:39:00It will understand the syntactic structure.
32:42:00If you see the word "he," it'll do
32:43:00that co-reference resolution.
32:45:00It'll understand synonyms.
32:47:00But it's not fully modeling the ideas
32:49:00that you have to say when you write an article or a blog
32:53:00post.
32:54:00And that's what we would like to actually understand.
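One way to picture the hierarchy hiding in that tweet (a hand-built Python illustration, not the output of any Google system):

    # A hand-written nesting of "Everything Ray Kurzweil is saying at I/O is nonsense",
    # showing that even a short sentence carries a hierarchical structure of ideas.
    tweet = {
        "claim": {
            "subject": {
                "head": "everything",
                "modifier": {            # relative clause attached to the subject
                    "who": "Ray Kurzweil",
                    "action": "is saying",
                    "where": "at I/O",
                },
            },
            "predicate": "is nonsense",
        }
    }

    def depth(node):
        # Depth of nesting: a crude measure of how hierarchical the idea is.
        if not isinstance(node, dict):
            return 0
        return 1 + max(depth(value) for value in node.values())

    print(depth(tweet))   # 4 levels of structure in a one-line tweet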
32:57:00And then you would be able to dialogue
32:59:00with your search engine to give it complex tasks,
33:02:00and interact with it the way you would with a human assistant.
33:06:00And it would then go out.
33:07:00And maybe it wouldn't even find the information that day,
33:10:00but a week later, would pop up and say,
33:12:00you asked me this question a week ago,
33:14:00and new research just came out that answers that question,
33:18:00and so on.
33:19:00
33:24:00So let's stop here, and I notice that time clock isn't working,
33:29:00so I have no idea where we are.
33:30:00But RJ, maybe you've got some questions.
33:36:00RJ MICHAEL: Yes, indeed.
33:38:00So I've gathered up some questions
33:40:00from people, fellow Googlers, from some of our friends
33:44:00throughout the community.
33:46:00And we're going to take this opportunity
33:49:00to ask some of these questions, starting
33:52:00with a quote from William Gibson,
33:55:00the famed author who said that the future is here--
33:58:00it's just unevenly distributed.
34:01:00Where do you think things are running fast,
34:03:00and where are they lagging further behind
34:05:00than what you would have expected?
34:08:00RAY KURZWEIL: Well, I think it's actually
34:10:00very widely distributed.
34:12:00Companies like Google-- and not just
34:13:00Google-- Apple, Microsoft, Facebook--
34:16:00are not just using these technologies
34:19:00with a few corporations and government agencies.
34:21:00It's in billions of hands.
34:24:00Google search is used by between 1 and 2 billion people,
34:28:00and we hope to expand that to the next couple billion users
34:32:00and a couple billion after that.
34:34:00That's the business model, and I believe
34:38:00that is actually how we will use these technologies.
34:40:00They'll be very widely distributed.
34:42:00And they're very democratizing.
34:45:00The technologies that move very smoothly
34:47:00are the sort of pure application of the law of accelerating
34:49:00returns.
34:50:00When you get into regulatory issues,
34:52:00like we have with the self-driving cars, maybe
34:56:00a little less predictable.
34:57:00There's a lot of regulation in medicine.
35:00:00But I believe these technologies ultimately
35:03:00will be so profoundly superior, that they will actually
35:07:00accelerate these regulatory processes as well.
35:10:00
35:12:00RJ MICHAEL: So you outlined in your book, "How
35:15:00to Create a Mind," the idea of what it's
35:19:00going to take to actually create a mind.
35:20:00And you've chosen to pursue these ideas here at Google.
35:25:00Why Google?
35:28:00RAY KURZWEIL: Well, it's actually
35:29:00the first time I've done that.
35:32:00But I've realized that you need unique resources to do this.
35:37:00It's a very difficult problem.
35:39:00So for one thing you need a tremendous amount of talent.
35:42:00That's, I think, the primary resource that's unique--
35:48:00maybe not completely unique, but it's certainly
35:50:00evident at Google.
35:52:00And then you want to run something
35:54:00on a million computers, and you want tremendous data
35:57:00that reflects language.
35:59:00And we have tens of billions of pages--
36:01:00virtually all books and web pages.
36:05:00And so this is not a project I could do with my own company,
36:08:00even if I raised all the money that I could hope for.
36:13:00And it's a bold company that takes on major challenges,
36:18:00and tries to improve the world with these applications,
36:21:00and make them widely available.
36:24:00So I like the philosophy of the leadership.
36:28:00RJ MICHAEL: Me too.
36:30:00It's true.
36:33:00But I'm an engineer at heart, and the engineer in me
36:36:00wants to know how you intend to build this thing.
36:39:00Could you describe to us what the engineered mind is like?
36:44:00What tech must we implement versus
36:46:00what behavior do we expect to emerge?
36:49:00What must be done in software?
36:52:00What must be done in hardware?
36:54:00RAY KURZWEIL: Well, on the hardware requirements,
36:56:00I mean, to functionally emulate the human brain--
36:58:00I've analyzed that.
37:00:00I've estimated it at about 10 to the 14th calculations
37:03:00per second in "The Singularity Is Near."
37:06:00So I hedged that a bit and said 10 to the 14th to 10
37:08:00to the 16th.
37:10:00I've reanalyzed it using different methods in "How
37:12:00to Create a Mind," and come up again with 10 to the 14th.
37:15:00There've been a number of independent analyses of that.
37:18:00They come up with the same figure.
37:20:00We've already surpassed that by three orders of magnitude
37:24:00in supercomputers.
37:27:00It'd be hard to provide 10 to the 14th calculations
37:30:00per second to all of a billion users
37:33:00kind of using it more or less at the same time.
37:36:00I've discussed this with Larry Page,
37:38:00and he thinks no, actually that could be possible.
37:40:00But the law of accelerating returns
37:43:00will make that easy by the early 2020s.
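A rough arithmetic sketch of the scale being discussed (my illustration; the per-user figure is the 10 to the 14th estimate from the talk, the user count is the "billion users" he mentions):

    # Brain-scale compute for one user versus a billion concurrent users.
    per_user_cps = 1e14          # calculations per second, Kurzweil's estimate
    users = 1e9                  # "a billion users ... more or less at the same time"
    total_cps = per_user_cps * users
    print(f"{total_cps:.0e} calculations per second needed")   # 1e+23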
37:46:00So it really comes down to a software problem.
37:49:00I described, I think, the key software problem.
37:52:00We can already create these hierarchies.
37:54:00In Google, and in other companies,
37:56:00there's debate between several different learning methods,
37:59:00and they have pros and cons.
38:02:00We need one that can actually represent hierarchies
38:05:00of complicated patterns, where each pattern has
38:08:00its position in a complicated hierarchy of patterns.
38:12:00And the key unsolved problem is, how do we then
38:15:00add a conceptually more abstract level to that?
38:19:00And I think we can use machine learning
38:22:00to find the patterns at that high level,
38:25:00but you need to be able to model them correctly.
38:27:00And so that's what we're exploring.
38:29:00And then applying it to language.
38:31:00And ultimately, Google will apply it
38:33:00to other types of input, like videos and pictures.
38:35:00And we're already making a lot of progress.
38:38:00Machine learning at Google is already very powerful.
38:41:00Once we have a system that's working,
38:43:00there will be little loops that are very tight that
38:46:00are taking up the bulk of the computation
38:48:00that we could put in an ASIC, in a dedicated
38:51:00circuit, because you can get a 1,000-fold increase in price
38:55:00performance by hard-wiring repetitive algorithms
38:59:00in hardware.
39:00:00And there are attempts, of course, to do that.
39:03:00I think it's premature now, because we haven't really
39:05:00settled on the right type of machine learning.
39:08:00So we don't really know what algorithm to speed up.
39:10:00But that'll be a straightforward engineering trade-off,
39:13:00once we can actually mess with the software.
39:16:00So it's fundamentally a software problem.
39:19:00RJ MICHAEL: So most of the people in this room
39:21:00are engineers, or are heavily involved in app development
39:24:00as well.
39:25:00And everything that you're talking about here, these
39:28:00are exciting visions to a lot of people here,
39:30:00and myself as well.
39:32:00But what can the developers in this room
39:35:00do to turn these ideas of yours into actual working
39:38:00products and systems?
39:40:00What role do these developers play in pushing all of this
39:43:00forward for us?
39:45:00RAY KURZWEIL: Well it's application developers
39:46:00that drive it forward.
39:48:00And I mean, there's a debate in the AI
39:51:00field between traditional artificial intelligence,
39:54:00and something called AGI, Artificial General
39:56:00Intelligence, which is implicitly a criticism that AI
39:59:00has not pursued general intelligence,
40:02:00and has often gone to narrow things
40:04:00like OCR, speech recognition, or robotics.
40:08:00But I actually think we get from here to there-- there
40:11:00being future AI, strong AI-- one step at a time.
40:16:00And the steps are applications, and we
40:18:00need to actually optimize the technology
40:20:00for the applications.
40:21:00It's very hard to develop a technology
40:23:00if you don't have something to optimize it for.
40:26:00So I like the idea of crossing the river
40:29:00kind of from one stone to the next.
40:31:00People say, what about that part of the river
40:33:00where it's too deep, and there are no stones?
40:35:00So I'm not sure of the answer to that.
40:38:00But we do get from here to there through one step at a time.
40:42:00And each step is sort of benign.
40:44:00It's exciting in the application world,
40:47:00but it's not the grand step to AI.
40:53:00But that's how we're going to get there,
40:55:00and the application developers push it forward,
40:58:00and make it practical, and provide an economic business
41:01:00model for it.
41:04:00RJ MICHAEL: So this law of accelerating returns
41:06:00that you talk about-- you've made
41:09:00it clear that it exists in the natural world,
41:14:00and in the technology space it's definitely true.
41:17:00It all starts to feel just like natural law,
41:20:00like natural progression.
41:21:00Why do we have to work toward it?
41:23:00Why don't we just sit back and let it happen for us?
41:26:00RAY KURZWEIL: Yeah, that question comes up a lot--
41:28:00why don't we just sit back and let it happen?
41:31:00Why are we working so hard?
41:33:00And if we did that, it wouldn't happen.
41:35:00So what is actually predictable is the human passion
41:39:00to make improvement, to use the computers of 2014
41:42:00to create the computers of 2015.
41:44:00We couldn't do that a decade ago.
41:47:00And we're able to improve things in an exponential manner.
41:52:00Things are 1x.
41:53:00We try to make it 2x.
41:54:00If they are 1,000x, we don't seek to make it 1,001x.
41:58:00We try to make it 2,000.
42:00:00And we have the tools to do that.
42:03:00I have a mathematical treatment in "Singularity is Near."
42:05:00The empirical data is the strongest evidence
42:09:00for the law of accelerating returns.
42:10:00But it is driven by application developers and technology
42:15:00developers taking each step with the current state of the art.
42:18:00
42:23:00RJ MICHAEL: And speaking about the curves,
42:25:00and the rising curves-- sorry, I lost my place on my page here.
42:34:00Well, so let's talk about specialized smarts, then.
42:39:00There's a role for specialized smarts,
42:41:00and the neocortex seems to have a very general architecture--
42:45:00repeated architecture, as you were mentioning--
42:47:00but there also seem to be specific modules that
42:50:00have evolved.
42:51:00Do you think that there are specific functions
42:53:00that we're going to need to build for our learning
42:55:00machines?
42:56:00And how do you know when to go really specific,
43:01:00and focus on one particular thing, versus just allowing
43:04:00it to be handled by the general architecture?
43:07:00RAY KURZWEIL: Well, we still have the old brain,
43:09:00like the amygdala puts out an ancient cascade of hormones,
43:13:00to prepare us for a fight or flight.
43:15:00It's no longer able to decide what
43:17:00to be afraid of, so your boss walks in the room,
43:19:00and whether that causes laughter or fear is really
43:22:00up to the neocortex.
43:23:00And the neocortex is a general architecture.
43:26:00There are no specialized regions.
43:28:00There's no music region.
43:31:00But the patterns in music, or even
43:33:00particular types of music-- whether it's
43:36:00Chopin waltzes, or hip-hop-- are specialized types of knowledge.
43:41:00And we have a limited capacity in neocortex,
43:45:00so you can really be a world-class master of one
43:48:00thing.
43:49:00Einstein played the violin, but he was no Jascha Heifetz.
43:53:00Heifetz was interested in physics,
43:55:00but he was no Einstein.
43:58:00We really need to devote the bulk of our neocortex
44:01:00that's not devoted to every-day concerns
44:04:00to one type of knowledge, which has its own type of patterns,
44:09:00and learn the patterns that others have created,
44:11:00and then push it forward.
44:13:00But really, the architecture is pretty much the same.
44:19:00RJ MICHAEL: I find that fascinating.
44:20:00It's the same algorithm repeated.
44:23:00So while it's great for all of us
44:26:00that personal technology has been
44:28:00freeing us up, and empowering us all,
44:31:00giving us a lot more free time, and giving us
44:33:00more capabilities under our fingertips,
44:35:00do you think that we are actually
44:37:00going to make good use of it?
44:39:00The thing that I keep wondering about
44:41:00is are we essentially going to use this free time, and all
44:43:00these extra capabilities that computers give to us, to allow
44:48:00us to watch more television, and consume more sugary snacks?
44:52:00Which is what I'm afraid of.
44:54:00So how are we going to use this technology that you're
44:57:00developing, to ensure that we actually will live better?
45:01:00RAY KURZWEIL: Well, there's always pros and cons.
45:03:00We just happen to be on the right slide here.
45:07:00And this is just one perspective.
45:09:00People quickly lose perspective.
45:11:00We forget what things were like eight years
45:14:00ago, before there were social networks, 15 years ago
45:16:00before there were search engines.
45:18:00Once these things happen, we assume
45:20:00they have always been around.
45:22:00People certainly forget what things were like 200 years ago,
45:24:00when Thomas Hobbes described life
45:26:00as short, brutish, disaster-prone, poverty-filled,
45:29:00disease-filled.
45:30:00Let's take a quick, one-minute trip
45:32:00through the last two centuries.
45:34:00These are countries.
45:34:00The big, red circle is China.
45:36:00It does some interesting things.
45:37:00Keep an eye on that.
45:38:00The x-axis is income per person in today's dollars,
45:42:00so you can understand it.
45:43:00So there were wealthy countries and poor countries,
45:45:00but nobody was very wealthy.
45:46:00Income per person was hundreds of dollars in today's dollars.
45:51:00On the y-axis is life expectancy.
45:53:00It was in the 20s and 30s-- worldwide average, 37.
45:57:00So this was the beginning of the Industrial Revolution, started
46:00:00in the textile industry in England in 1800.
46:03:00A few countries are making progress.
46:07:00But as you get to the 20th century, the 1900s,
46:10:00you'll see a wind that carries all
46:12:00of these countries towards the upper right-hand corner
46:14:00of the graph.
46:15:00The have, have-not divide does not go away.
46:19:00There's still rich countries and poor countries.
46:21:00But the countries that are worst off at the end of the process
46:24:00are far better off than the countries that
46:26:00were best off at the beginning of the process.
46:28:00And I shouldn't say end of the process,
46:30:00because the process isn't ending.
46:32:00It's going to go into high gear as we
46:33:00get to the more mature phases of the biotechnology
46:36:00and three-dimensional printing revolutions, AI, and so on.
46:41:00RJ MICHAEL: That's just awesome.
46:43:00
46:45:00RAY KURZWEIL: So to be continued.
46:47:00
46:52:00RJ MICHAEL: So, and I think we might have a few minutes left
46:56:00to take some questions from the audience at this point.
47:00:00If--
47:01:00RAY KURZWEIL: Well--
47:02:00RJ MICHAEL: There's two more left to ask,
47:03:00but I'm saving those babies for the end.
47:05:00Unless you have something you wanted to address.
47:08:00RAY KURZWEIL: I don't know how much time we have,
47:10:00because the countdown clock isn't working.
47:11:00RJ MICHAEL: Yeah, the countdown clock stopped.
47:13:00We have two minutes left?
47:14:00Oh, two minutes.
47:15:00RAY KURZWEIL: OK.
47:15:00RJ MICHAEL: All right, well, then, I'm
47:16:00going to stick to my questions, then.
47:18:00Sorry, you guys.
47:20:00Because I've got one that is just
47:22:00one of my favorite interview questions,
47:23:00one that I have been dying my whole life
47:25:00to ask Ray.
47:26:00So if you would tell us please, what
47:29:00are some of the more humbling experiences
47:33:00you've had researching and developing
47:35:00your concepts over the years?
47:39:00RAY KURZWEIL: Well, I think it's this one unsolved research
47:42:00question-- something the human brain is able to do--
47:46:00that I think is the key to making further advances
47:48:00in artificial intelligence.
47:51:00The neocortex is organized in these layers,
47:54:00and each layer is more abstract than the one below it.
47:58:00And we're able to actually-- if we understand something,
48:00:00like we understood speech recognition,
48:03:00we can actually identify that phonemes should be here,
48:05:00words should be here, and we can create that hierarchy,
48:08:00and then use machine learning to learn each level.
48:12:00But how do we then create a more abstract level on its own?
48:16:00Because the neocortex does that.
48:19:00I've been watching my grandson go through level after level.
48:22:00Now he's almost three, and he's got quite a few levels
48:25:00under his belt.
48:27:00And that's done with the neocortex,
48:31:00without really any external input, other than his parents
48:37:00saying, that was good, Leo.
48:41:00So how do we do that?
48:44:00That's what we hope to solve.
48:46:00I think we can have a stable set of hierarchies,
48:52:00and then find patterns at that level using machine learning,
48:55:00and then automatically add a new level.
48:58:00But that has never been demonstrated,
49:00:00and if we could do that, I think we'll
49:03:00make great strides in artificial intelligence.
49:05:00But so far, that has eluded the AI field.
49:09:00RJ MICHAEL: So then I'll end with this--
49:11:00you mentioned your grandchild, three-year-old.
49:13:00There's certain definitions of the word consciousness
49:17:00that would suggest that a three-year-old has not yet
49:20:00achieved consciousness-- awareness of self,
49:22:00awareness of what's in a mirror, and things like that.
49:25:00RAY KURZWEIL: Well, he would disagree with that.
49:26:00But--
49:26:00
49:28:00RJ MICHAEL: So I would like to end, then,
49:30:00with three simple questions, that I would
49:34:00ask you to take all together for us.
49:36:00What is consciousness?
49:38:00What is free will?
49:41:00And what is soul?
49:42:00
49:46:00RAY KURZWEIL: Well, I always thought
49:47:00you had a good sense of humor, so one minute
49:52:00should be plenty for that.
49:54:00[LAUGHTER]
49:56:00We've debated that for thousands of years,
49:58:00going back to the Platonic dialogues.
50:00:00But to summarize, consciousness--
50:03:00whether or not an entity--
50:06:00[LAUGHTER]
50:12:00Whether or not an entity has consciousness
50:14:00is not a scientific question, because there's
50:16:00no falsifiable experiment you could run
50:23:00that would really definitively determine
50:25:00whether or not an entity is conscious.
50:27:00We assume that each other is conscious,
50:30:00but that human agreement falls apart when
50:32:00you go outside of human experience.
50:34:00People disagree about animals.
50:36:00They will disagree about future AIs.
50:40:00An AI could claim it's conscious.
50:42:00Eugene Goostman claimed that he was conscious,
50:45:00but it wasn't very convincing.
50:46:00
50:49:00And so some scientists say,
50:52:00well, it's not a scientific question.
50:54:00It's not important.
50:55:00We should dismiss it.
50:56:00It's just an illusion.
50:57:00I think that's a mistake, because our whole moral system
50:59:00is based on consciousness.
51:02:00So you need a leap of faith.
51:04:00My leap of faith is that if an entity seems conscious,
51:07:00and seems to be having the subjective experiences
51:11:00it claims to be having, I'll believe it's conscious.
51:14:00I will also make an objective prediction
51:15:00that most people will accept the consciousness
51:18:00of these entities.
51:22:00And so a valid Turing Test-- I mean, I have a long-standing
51:27:00Turing Test bet with Mitch Kapor that by 2029, a computer
51:32:00will pass the Turing Test.
51:33:00And we actually set a very difficult set of rules.
51:36:00I think if an AI passes that, people will really be convinced
51:40:00that it's really conscious, and we will accept it
51:44:00as having those subjective experiences.
51:46:00Identity is really a continuation of a pattern.
51:49:00People say, what are you talking about?
51:51:00Your identity is this-- your physical stuff.
51:53:00It's this flesh and blood, but actually this is very different
51:56:00than it was six months ago.
51:58:00All of our cells turn over, some in hours,
52:01:00some in days, some in weeks.
52:02:00The different components of the neurons,
52:04:00the tubules, the ion channels, the filaments,
52:07:00change over in either hours, or days, or weeks.
52:10:00I'm completely different stuff than I was six months ago.
52:13:00So I make a comparison to water in the stream.
52:17:00It may make that certain pattern as it goes around a rock.
52:22:00That pattern can stay the same for days, weeks, years.
52:27:00But the water actually changes in milliseconds.
52:29:00So is that the same river?
52:32:00There's a Chinese proverb, you can't walk in the same river
52:34:00twice.
52:35:00But it's actually a continuation of a pattern.
52:38:00And that's what we are.
52:39:00The pattern changes slowly, but continuation of pattern,
52:44:00even if we introduce non-biological elements to it,
52:46:00we would be continuing that pattern.
52:50:00And free will, that's impossible to define.
52:55:00I'm not convinced I have free will.
52:56:00Major decisions like starting a project,
52:59:00coming to work at Google, speaking at I/O--
53:03:00did I really make that decision?
53:05:00What was I thinking?
53:06:00
53:08:00They just seem to happen on their own.
53:11:00I do actually think a lot, and I think
53:13:00I do have free will deciding what to eat for lunch,
53:16:00so I think making eating choices is maybe
53:18:00the heart of free will.
53:21:00So I'll leave it at that.
53:23:00RJ MICHAEL: OK, that's very good.
53:24:00Thank you so much, Ray.
53:26:00You did just awesome.
53:28:00Thank you so much.
53:30:00Excellent.
