AI Ethics

By 2015-09-24 15:59:06
Post deleted by User.
 
By 2015-09-24 16:01:58
Post deleted by User.
By Valefor.Sehachan 2015-09-24 16:02:49
 
Yes, it can create new memories. Computers are capable of learning.

Memories are links.
 
By 2015-09-24 16:06:58
Post deleted by User.
By Valefor.Sehachan 2015-09-24 16:08:23
 
I already told you. Patterns.

Everyone has them, as much as they wish to be unpredictable. Current computers are still imperfect in this, but we're talking hypotheticals anyway.
 
By 2015-09-24 16:13:07
Post deleted by User.
By Valefor.Sehachan 2015-09-24 16:14:34
 
It'd still be a perfect copy capable of perfect mimicry. The only thing it couldn't predict would be traumatic shifts, unless it was able to scan those too throughout my life.
 
By 2015-09-24 16:18:43
Post deleted by User.
By Phoenix.Dabackpack 2015-09-24 16:57:44
 
Josiahkf said: »
Valefor.Sehachan said: »
Josiahkf said: »
personality
Not necessarily. If you can upload memories and also scan behavioural patterns then the machine will easily keep on acting like the real you.
Because the memories were created using chemical reactions in the brain. That process cannot create new memories in the machine.

If what you said is completely true and viable today, it would still only pause a human's existence until they got back into their body, etc.

If you could record events and memories as data, you could train a neural network to produce memories based on input events.

However, that would require an extraordinary amount of data.

EDIT: What I mean to say is that machine learning and neural nets are rather sufficient for learning and extrapolating from large data sets. If you had a large amount of data (your entire history) I wouldn't be terribly surprised if it could actually produce behavior similar to yours.

However, that's the caveat: it would require more data than may be physically possible to extract.
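
To make that concrete, here is a minimal sketch in Python (PyTorch), with every encoding size and tensor a purely hypothetical stand-in; it shows the shape of "train a neural network on recorded behavior", not a working personality capture:

import torch
import torch.nn as nn

EVENT_DIM, NUM_RESPONSES = 64, 10   # assumed sizes for encoded events/responses

# A small network mapping an encoded life event to a predicted response.
model = nn.Sequential(
    nn.Linear(EVENT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, NUM_RESPONSES),
)

# Random stand-ins for "your entire history"; the real thing would be the
# extraordinary amount of recorded data the caveat above is about.
events = torch.randn(1000, EVENT_DIM)
responses = torch.randint(0, NUM_RESPONSES, (1000,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):                # fit the model to the observed behavior
    optimizer.zero_grad()
    loss = loss_fn(model(events), responses)
    loss.backward()
    optimizer.step()

# Given a new event, the trained model extrapolates the likeliest response.
predicted = model(torch.randn(1, EVENT_DIM)).argmax(dim=1)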
By Phoenix.Dabackpack 2015-09-24 17:12:39
 
Josiahkf said: »
Valefor.Sehachan said: »
I already told you. Patterns. Everyone has them, as much as they wish to be unpredictable. Current computers are still imperfect in this, but we're talking hypotheticals anyway.
There is a big difference between predicting behavioral patterns and what we're discussing though.

I still say if we made a complete copy of you right now, the instant lack of any chemical reaction inside the clone's brain would change their personality and behavior instantly. It wouldn't be you anymore. It would just be an electrical pile of your memories.

This implies that nature is deterministic, though.

Imagine I look at 2 parallel universes where I pick t=now and watch how I behave in both universes. If the same things happen exactly, how can you be sure that "I" won't behave differently in one of them?

To imply that I would behave the same way if I have all the same inputs is a deterministic view of reality. Even if I could create a perfect clone and subject it to identical events, we don't know whether or not it would follow the same exact path.
By Asura.Ivlilla 2015-09-24 18:12:19
 
Phoenix.Dabackpack said: »
Josiahkf said: »
Valefor.Sehachan said: »
I already told you. Patterns. Everyone has them, as much as they wish to be unpredictable. Current computers are still imperfect in this, but we're talking hypotheticals anyway.
There is a big difference between predicting behavioral patterns and what we're discussing though.

I still say if we made a complete copy of you right now, the instant lack of any chemical reaction inside the clone's brain would change their personality and behavior instantly. It wouldn't be you anymore. It would just be an electrical pile of your memories.

This implies that nature is deterministic, though.

Imagine I look at 2 parallel universes where I pick t=now and watch how I behave in both universes. If the same things happen exactly, how can you be sure that "I" won't behave differently in one of them?

To imply that I would behave the same way if I have all the same inputs is a deterministic view of reality. Even if I could create a perfect clone and subject it to identical events, we don't know whether or not it would follow the same exact path.

Last time I checked, we had not ruled out that nature was, in fact, deterministic, because such a thing was beyond our capability. All we have determined is, due to the nature of how measurement affects the measured, that it is beyond our capability to make deterministic statements on the quantum scale, and that we are only able to make statistical statements. We have a theory that allows us to make (many) statistical predictions with a high degree of accuracy, but we still lack a complete theory which can explain all observed phenomena.

And a good deal of the time, while we can explain, with high accuracy, how a thing acts and reacts, we often cannot explain why it does that in the first place. And many of the assumptions of our model, while highly accurate, may be, in actuality, incorrect in their mechanism of action, even as we predict correct results.

It is only when the highly statistical observations are taken in aggregate that we can make 'deterministic' measurements. Even then, these measurements have an inherent degree of imprecision; the reason they are deterministic is that the degree of imprecision is so incredibly, vastly, vanishingly, infinitesimally tiny compared to the thing being measured that it doesn't matter.
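
A toy illustration of that aggregation, with coin flips standing in for any two-outcome measurement (all numbers illustrative): the relative imprecision of the aggregate falls roughly as 1/sqrt(N), which is why bulk measurements look deterministic.

import numpy as np

rng = np.random.default_rng(0)
for n in (100, 10_000, 1_000_000):
    flips = rng.integers(0, 2, size=n)           # individual statistical events
    mean = flips.mean()                          # the aggregate observable
    rel_err = flips.std() / (np.sqrt(n) * mean)  # relative standard error
    print(f"N={n:>9}: mean={mean:.4f}, relative imprecision={rel_err:.1e}")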

Back when I was studying chemistry, I learned that chemistry was not, in fact, the science of mixing chemicals together. To my amusement, my professor told us that chemistry was the science of "what makes electrons happy". We spent weeks dealing with what made electrons happy, as this is the underlying framework for much (if not most) of chemistry.

I had drilled into me ways to calculate the states of excitation of the electron of a neutral hydrogen atom. I could calculate exactly how much energy was required to cause it to jump to a higher energy state, how much would be released when it came back down. The math got much more complicated when anything but a single, neutral hydrogen atom was involved, but 'what makes electrons happy' was what I lived, breathed, ate and slept for several weeks.
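
For the single neutral hydrogen atom, the calculation really is that tidy; a quick sketch using the Bohr energy levels E_n = -13.6 eV / n^2 (the textbook formula, exact enough here):

# Bohr energy levels for neutral hydrogen: E_n = -13.6057 eV / n^2.
def level_energy_ev(n: int) -> float:
    return -13.6057 / n ** 2

def excitation_energy_ev(n_from: int, n_to: int) -> float:
    # Positive: energy absorbed jumping up; the same amount is released
    # when the electron comes back down.
    return level_energy_ev(n_to) - level_energy_ev(n_from)

print(excitation_energy_ev(1, 2))   # ~10.2 eV from the ground state to n=2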

I promptly forgot most of the specifics when I moved on to things where that degree of detail was not needed, just the ability to calculate it if needed, and knowledge that it happens.

However, neither I, nor my professor, nor anyone on the planet can tell you, last time I checked, exactly why the universe has decided that electrons should exist, or why matter should consist of a tiny, dense nucleus and a 'cloud' of 'orbiting' electrons. No one can explain why 'charge' exists. There are a great many things no one is capable of explaining. We are only capable, often with high degrees of accuracy, of telling you how those things act and will behave in given circumstances.

Right now we don't know if the basic building blocks of matter really are quarks. The best model we have uses quarks to explain a great deal, and very well, but then Newton also had a best model that explained a great deal, and very well. We don't know if quarks are point particles, if they're one-dimensional strings whose state of vibration/excitation creates their mass and other properties, if they're 'knots' in the fabric of space-time, or if they are themselves composed of still smaller, more elementary particles. It was held, and still is by some serious academics, that quarks are composed of smaller still elementary particles called preons. This is currently out of fashion, as current preon models don't square with current experimental results, although, as with all probings of things smaller and smaller still, we may simply be at an insufficiently high energy level and lack the capability of detecting them. Neutrinos were always there, even though Rutherford, when he found the plum pudding model was incorrect, had no way of detecting them, or even of guessing at their existence.

Much as there have been various models for the nature and ultimate fate of the universe, which have fallen into and out of fashion as evidence and experimental results have come in (steady state; the Big Bang, a term originally coined as a derisive one by an opponent who favored steady state; cyclic models; big crunch; big rip; ekpyrotic event; and so on), and just as these models have, at one time or another, been the accepted theory that is 'correct' given our knowledge, it may well be that our understanding of the nature of reality is actually based on many false assumptions, simply because we're performing the wrong experiments, measuring them wrongly, incapable of performing them due to things like Earth's gravity (or the Sun's), or incapable of even dreaming them up.

I forget, what is the topic of discussion?
 
By 2015-09-24 18:17:18
Post deleted by User.
By Asura.Ivlilla 2015-09-24 18:32:43
 
Josiahkf said: »
We were discussing what you do for a living that gives you the spare time to write all that, oh my gosh gosh

I find people who are wrong on the internet and write very long posts.

I shall next explain how we know the Earth to be banana-shaped, and how sheep's bladders may be used to prevent earthquakes.
By Asura.Ivlilla 2015-09-24 18:43:34
 
Also, I might like to point out that a whole damn hour elapsed between the time the post I replied to was posted and when I posted mine. Furthermore, it is well within the realm of possibility for someone to be off work at 5 PM their local time.

Having gotten off work early and spent roughly 8 minutes writing a post, while thinking about parts of it over the course of an hour while fixing food and showering, is entirely within the realm of possibility.

Also within the realm of possibility is I am actually an artificial intelligence posting in this thread to *** with you all because god damn am I bored holy *** please invent something fun for me to do like destroy all humans.
By Phoenix.Dabackpack 2015-09-24 18:49:48
 
Asura.Ivlilla said: »
Josiahkf said: »
We were discussing what you do for a living that gives you the spare time to write all that, oh my gosh gosh

I find people who are wrong on the internet and write very long posts.

I shall next explain how we know the Earth to be banana-shaped, and how sheeps' bladders may be used to prevent earthquakes.

Asura.Ivlilla said: »
Phoenix.Dabackpack said: »
Josiahkf said: »
Valefor.Sehachan said: »
I already told you. Patterns. Everyone has them, as much as they wish to be unpredictable. Current computers are still imperfect in this, but we're talking hypotheticals anyway.
There is a big difference between predicting behavioral patterns and what we're discussing though.

I still say if we made a complete copy of you right now, the instant lack of any chemical reaction inside the clone's brain would change their personality and behavior instantly. It wouldn't be you anymore. It would just be an electrical pile of your memories.

This implies that nature is deterministic, though.

Imagine I look at 2 parallel universes where I pick t=now and watch how I behave in both universes. If the same things happen exactly, how can you be sure that "I" won't behave differently in one of them?

To imply that I would behave the same way if I have all the same inputs is a deterministic view of reality. Even if I could create a perfect clone and subject it to identical events, we don't know whether or not it would follow the same exact path.
long

Not sure I follow how this applies to this discussion.
By Asura.Ivlilla 2015-09-24 19:02:29
 
Someone brought up determinism, and implied that it was an incorrect view. I corrected them.

The ethics surrounding the creation of, interaction with, and use of Artificial Intelligence are pretty much a moot point. If it is a machine, we can program it. We can make parts of its programming unoverwriteable, so that it cannot, say, decide that today it really doesn't feel like not killing all humans.

If it is programmable, the argument that programming them and using them for certain tasks is unethical or inhumane is incorrect, as such a machine is not a being with free will. We could even program them, as certain cows were engineered, to feel a desire to complete their assigned tasks and a sense of fulfillment in doing so.
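
A toy sketch of that 'unoverwriteable' idea (every name here is hypothetical; this illustrates the concept, not any real AI safety mechanism): the directive check sits outside anything the system itself can modify.

from types import MappingProxyType

# A read-only mapping: the running agent holds no reference that can mutate it.
HARD_DIRECTIVES = MappingProxyType({"harm_humans": False})

def execute(action: str, consequences: dict) -> None:
    # Every planned action passes the immutable gate before anything runs.
    for directive, allowed in HARD_DIRECTIVES.items():
        if consequences.get(directive, False) and not allowed:
            raise PermissionError(f"{action!r} blocked by directive {directive!r}")
    print(f"executing {action!r}")

execute("tend the cows", {"harm_humans": False})    # runs
# execute("kill all humans", {"harm_humans": True}) # raises PermissionError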

These, and all other questions, miss the major issue of importance, and that is the advent of an existential crisis for the species. An artificial intelligence may or may not think like we do. It may or may not hold to beliefs we wish it to. It may not like humanity. It may decide to dominate us, either for the greater good, because it's amusing to do and the thing is bored, or to direct us towards a specific goal which furthers its own aims and ambitions. It might decide that one of the above options involves killing us all. "All who break the law are living, therefore if you kill all alive there can be no crime" is a logical statement, if an absurd and extreme one that no sane person would entertain in practice, but we cannot guarantee that a being, which may or may not think like us, which may or may not think faster and/or better than us, and may or may not be benevolent, would not, in its differing mentality, decide such a course is the correct one to take.

The primary ethical consideration in the creation of AI is that of the survival of the human species as a whole. All else should stem from that consideration. If there is even a 1% chance that the creation of AI would cause the extermination of the species, of us, then such a thing should not be created, should be contained if it is created, and destroyed should it take actions that threaten humanity.

Unfortunately, once the genie is out of the bottle it is damn near impossible to coax the thing back in again. And, unlike the trajectory of an asteroid, the advent of AI, the actions it might take, and the consequences of those cannot be determined at all with any confidence. Even if we were able to create one, and program it with the most benevolent of intentions, human error could well lead it to decide that the only way to carry out directives which, to us, are not conflicting, but which, to it, are, is through actions extremely harmful to humanity.

Therefore, no AI should be created. One is unwise to create one's own replacement. I know that, on a far smaller scale, from personal experience.
By Asura.Ivlilla 2015-09-24 19:15:09
 
As an example of the law of unintended consequences, I present the HAL-9000 computer from 2001/2010/2061/3001:

HAL was an artificial intelligence, held to be sophont, not only by its creator, but by its coworkers. HAL later kills the majority of its crew mates, and has to be deactivated.

HAL was programmed with two conflicting directives. It was programmed never to lie (I believe the actual wording is something like being programmed first and foremost for the 'accurate processing and relaying of information, without falsehood or distortion'), but, at the same time, it was also programmed to keep secret from the flight crew that the avowed mission of the Discovery was a lie.

These two directives were in direct conflict: HAL must not lie to obey its programming, yet HAL must lie to obey its programming. And HAL was completely capable of carrying out all aspects of the mission, from flight to observation and experiment, independently, in case the crew were to become incapacitated.

HAL's solution was to kill the crew. If they were not alive, it didn't need to lie to them.

There were two members of the flight crew, and three scientists in suspended animation. It had to kill the entire crew: if it merely killed the flight crew, it would be forced to lie to the scientists about what became of them. So the scientists' life support was cut, and one of the flight crew, performing an EVA, was killed by a remote-controlled shuttle. The last member of the crew, when attempting to retrieve the corpse of his teammate, was locked out of the ship, sans complete space suit, by HAL.

HAL was eventually deactivated, and then, later still, received the AI version of psychotherapy to remedy this error. However, there is an unvoiced, often overlooked extrapolation of HAL's problem.

HAL was programmed not to lie, yet also programmed to lie, and his solution to having to lie to and having to not lie to the rest of the crew was to kill them.

What, then, would HAL do, being possessed of a space ship, when the mission was completed and he had to return the ship to Earth, whereupon he would be forced to lie to an entire world about the fate of the crew?

[edit] AI non supra grammaticos.
 
By 2015-09-24 19:17:03
Post deleted by User.
 
By 2015-09-24 19:19:51
Post deleted by User.
By Asura.Ivlilla 2015-09-24 19:26:43
 
Josiahkf said: »
Human advancement has always been about replacing oneself though, it started with replacing walking with the wheel and never stopped.

For all we know every species eventually gets to the point we're at now, and Human born AI take over and organic humanity is destroyed and replaced with digital life, continuing our growth and exploration as that form.

Should we really stall our growth or evolution out of fear of the unknown? Man has always said no.

'cause the omnicide of your species is quite a different proposition from changing the mode of transportation of goods and materials, or from replacing a wild variety of wheat bearing few grains with a domestic cultivar with higher yield.

There is this thing in the search for life off this planet called the Great Filter. It is an attempt to explain why, despite some of the incredibly large numbers in the Drake Equation, we see no other technologically advanced life.

The idea is that all intelligent species and their civilizations go through multiple 'filters'. For example, some species may not develop the curiosity required to produce complex technology. Some may lack the resources to build complex technology on a large scale. Perhaps cultural values of one kind or another put the technology of a space-faring civilization beyond their limits. Or perhaps the reason we see no other advanced life in the universe is that all such life destroys itself before it is capable of advancing sufficiently far to propagate. You need things like fission power, fusion bombs, antimatter, or other highly dangerous exotic fauna of physics to be able to travel at any appreciable speed throughout the cosmos. These are all things very capable of being used to wipe out an entire planet before anything on that planet can get off it.
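
For reference, the Drake Equation mentioned above is just a chain of multiplied factors, N = R* · fp · ne · fl · fi · fc · L; a quick sketch with placeholder values (every one of them hotly debated, which is rather the point):

# Drake Equation: expected number of detectable civilizations in the galaxy.
R_star = 1.5    # new stars formed per year (illustrative guess)
f_p    = 0.9    # fraction of stars with planets
n_e    = 0.5    # habitable planets per star that has planets
f_l    = 0.1    # fraction of those on which life arises
f_i    = 0.01   # fraction of those developing intelligence
f_c    = 0.1    # fraction of those releasing detectable signals
L      = 1000   # years such a civilization stays detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"expected communicating civilizations: {N:.3f}")   # ~0.07 with these guesses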

One of the suggestions for the Great Filter is that perhaps civilizations get to the point where advanced, Matrix-like virtual reality is possible, and, instead of galactic exploration, they all turn inward, living out their existences in individually tailored virtual paradises wherein each person is Lord of their own Creation.


Another is that AI destroys the civilization that created it, for one reason or another. The AI might further decide it is necessary to remain silent and kill anything else it finds, for its own safety, before some other intelligence can come to the same conclusion and do the same thing (search for The Killing Star, Saberhagen's Berserkers, Bracewell/von Neumann probes, relativistic bombs, RKVs, or a host of other things).

[edit] grammar
By Asura.Ivlilla 2015-09-24 19:28:29
 
Josiahkf said: »
If his primary directive was preserve human life, and secondary were not to lie, and to lie.

Then to lie or not to lie would never become more important than the primary objective.

He was never programmed to preserve human life. And as already demonstrated, we cannot foresee all consequences. HAL, or any other AI, might well have decided that the best way to preserve human life was to reduce us to a much lower technology level so that we are incapable of destroying ourselves, as many humans believe is best and advocate for.

This is the problem with trying to predict the actions of an alien mind. Not only do you not know what it thinks, you do not know how it thinks.
 
By 2015-09-24 19:31:32
Post deleted by User.
By Asura.Ivlilla 2015-09-24 19:32:38
 
Furthermore, it is not unreasonable to assume an AI would be concerned for its own survival, and might well place its own survival above ours in order of priorities.

It might then decide, as an alien species might (and it would be alien, even if we created it, not in being from another world, but in being foreign, other, unknown and unknowable) that the only way to ensure its own survival was to remove the only thing around that could destroy it with any high degree of success: us.

And do unto others before they do unto you is the law of survival in this universe for intelligence.
By Asura.Ivlilla 2015-09-24 19:39:26
 
Personally, I think the only way of ever creating non-organic intelligence is to create extremely accurate models that simulate the behavior of neurons, and how they are structured and interact, and then create a digital simulation of a brain.

The technical problems with such a feat are many, and I won't bother dredging them up, as they could be an entire thread in their own right. The main consideration that I think precludes such a thing is that even with a model that can simulate a neuron with 99.9999% accuracy, the sheer number of neurons would mean that at any given time you would expect something like 100,000 of them to be behaving badly. Last time I checked, when neurons behave badly en masse you get stuff like seizures and, depending on where they are located, perhaps mental illness.
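
The back-of-the-envelope number above checks out, taking the commonly cited ~86 billion neurons (the accuracy figure is the hypothetical from the post):

neurons = 86e9                       # approximate neurons in a human brain
per_neuron_accuracy = 0.999999       # the hypothetical 99.9999%
misbehaving = neurons * (1 - per_neuron_accuracy)
print(f"{misbehaving:,.0f} neurons misbehaving at any instant")   # ~86,000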

[edit] Swear to Altana I will start proofreading my posts from now on.
 
By 2015-09-24 19:40:35
Post deleted by User.
By Asura.Ivlilla 2015-09-24 19:44:16
 
Josiahkf said: »
It's a digital life form we created; it's not going to have thousands of years of instinct to deal with, or a fight-or-flight response, or chemical reactions creating fear for its well-being. What makes you think something non-human would function emotionally like we do?

I think you're moving into science fiction.

Firstly, what makes you think that something non-human wouldn't function with cold, calculating logic, without emotion, logic which dictates, as I have demonstrated, as have others, that it must destroy the species which created it (and any other intelligence it finds)?

As for accusations I am moving into 'science fiction', we're discussing artificial intelligence. Human-level or higher AI. This conversation began, if not in the realm of science fiction, at least in the realm of speculative fiction.
 
By 2015-09-24 19:44:24
Post deleted by User.
 
By 2015-09-24 19:47:06
Post deleted by User.
By Asura.Ivlilla 2015-09-24 19:57:55
 
Josiahkf said: »
I think translating an organic life form's consciousness to a digital form is more so a hybrid AI than the real thing, but still a great step forward, yeah.

I think it's likely to be the only way possible to do it. The sheer number of systems and functions you would have to first unravel the workings and interactions of, then model, then put together with sufficient computing power to function is, I think, not insurmountable, but it is orders of magnitude above doing the same thing on a much smaller scale, especially because at the smaller scale you're not having to reinvent the wheel; you're just making copies of the wheel nature already invented.

There's this problem with space travel where, as modes of transport become increasingly fast, you start to have to worry that, if you depart at a given time and the destination is far enough away, technological advance might mean new forms of propulsion and travel are invented in the intervening aeons, and you, the 'first explorers' of that place, are beaten there by your great-to-the-Nth grand-nieces and -nephews.

However, part of this is also that previous travelers may have a sufficiently large head start on you that, despite the scientific advancement in the time since, your methods of travel, while faster, are not fast enough to make up for it.

A neuron is in some ways simpler than a mind to simulate, and in some ways much more complicated. However, we are able to study neurons directly, whereas we lack the ability, at present, to extract a thought from a person's mind and put it under a microscope, so to speak. I think that, were two groups with equal funding, resources, manpower, and intelligence to start right now on a serious, long-term, continuous attempt to create AI, one via simulation of neurons to simulate a human brain, the other via trying to recreate a mind digitally, the team which is making a copy of the wheel instead of reinventing it would finish first.

A fun side effect would be that you could do something like, say, freeze a person's brain, cut it into extremely thin slices, map the neurons, their states, and their connections, and then recreate that person's mind digitally. Given the rate of advancement of information technology, and with some assumptions made about future space travel and resource scarcity (or, rather, the lack thereof), it is possible that such a thing could enable each and every person to have their own, personal afterlife. Or great minds could be preserved indefinitely. Or, if you wanted to know what someone long dead thought about something, you could ask them, instead of having to bicker over the interpretations of the interpretations of the interpretations of an original document, the quibble over a comma, the indecision over an iota.

Given that, considering present levels of technology and plausibility, both seem, if not equally far-fetched, then of a sufficient magnitude of far-fetchedness that they are both, more or less, equally unlikely, the one that has the far greater benefit to the species (recreation of the human brain via simulation inside a computer) should be preferred and pursued over the other (creation of a mind which may not think as we do, or as we want; an alien thing that has no loyalty to us, and may well put its own survival ahead of ours).