AI on the cusp of tremendous breakthroughs
by azor
What does this mean for us as individuals and our civilization? Harris, Kurzweil, and Musk, among others, have some interesting thoughts on the subject, as do shows like Westworld. What do you think?
-
Saethydd
I for one welcome our new digital overlords and hope they will remember my support when they enslave us all.
In all seriousness, though, there are about a dozen or so major breakthroughs that need to happen before a legitimate AI is a thing, and while it's not impossible, I don't think it is likely to happen in our generation, unless, of course, one applies the Watchtower definition of that word.
As for their application, I suppose they could end up being a great asset or mankind's own destruction depending on how the cards fall; I'd say the state of our society when the AI is finally built will be the deciding factor.
-
bohm
Well, this is actually my research area (statistical methods for machine learning).
There is currently a huge increase in applications of, and interest in, machine learning. What many don't realize is that this is the result of gradual but steady improvement in the field since the 90s, which around 2010 meant machine learning could begin to solve "real" problems (image classification, etc.). That created a huge influx of investment, which has increased the rate of progress considerably; however, the progress is happening by modifying and tweaking technologies known since the 50s.
So despite the hype, people should think about the development of machine learning the way they think about batteries for cars: steady improvement until the batteries are "good enough", at which point it looks like an explosion.
In terms of AI, what machine learning does well today is classification and regression tasks of various sorts, (to some degree) control tasks in limited environments, and (to a lesser extent) image/sound generation... it would be very surprising to nearly everyone in the field if the methods in use now generalized to true AI.
What does seem within reach is machines that might not be able to think in any conventional sense but can still accomplish rich control tasks, for instance driving (nearly solved) or replacing low-skill factory labor with robots (progress will depend on the task, with the truly big breakthroughs still some way off). These robots can't "think", but at some point they will begin to replace low-skill factory jobs in large numbers, and that will have a *huge* consequence.
In other words, the Terminator might take your job, but it won't kill you :-).
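(For anyone curious what a "classification task" looks like in practice, here is a minimal sketch in Python using scikit-learn; the library and the toy digits dataset are just illustrative choices, not anything specific to the methods discussed above.)

# Minimal illustration: fit a simple classifier on a toy dataset and check accuracy.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)                 # small handwritten-digit dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)             # basic off-the-shelf classifier
clf.fit(X_train, y_train)                           # "learn" from labeled examples
print("test accuracy:", clf.score(X_test, y_test))  # prints the held-out accuracy

That is the kind of narrow pattern-recognition task the field handles well today, as opposed to anything resembling general intelligence.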
-
Saethydd
Just so long as they don't link them together geth-style and allow them to develop a group consciousness. XD
(Mass Effect reference for those of you who don't game)
-
azor
Bohm, I agree that the point you made about AI taking our jobs rather than killing us is the most likely outcome in the near future. My fear with that is the social ramifications. In other words, unless we start really talking about this in real-world scenarios now, we may end up killing ourselves off when we don't have productive work for the masses.
I worry we may not be able to evolve quickly enough to keep up with the sea changes this technology brings. Current events make me worry even more.
-
bohm
Azor: In the short run, automation of manufacturing should lead to a great increase in wealth, and it (should!) be a problem of redistribution.
In the long run there will be true AI and all bets will be off, but it's in all likelihood a very long way off (we don't know how the brain fundamentally gives rise to intelligence).
-
azor
Just the word redistribution is what concerns me. It's a bad word on the right.
-
bohm
Azor: All we've got to do is dye your hair blue and make a YouTube video where we cry every time someone says "redistribution"; they will demand redistribution within a month :-D.
-
azor
Love it. By the way, what are you referencing?
-
bohm
Azor: Oh, I just read Breitbart every now and then, and they always seem to be running at least one story along the lines of "so-and-so is crying/having a meltdown over this-or-that", and the comment sections always agree that whatever the person is having a meltdown over is the best thing ever :-).