views: 1532
answers: 17

Oppenheimer and the bomb are often invoked to illustrate the limits of what science and technology should do (rather than what they can do). Are there any computer science or programming problems that deserve a similar level of moral reflection before they are solved?

+2  A: 

See the "Favourite Colour: Myrtle" thread from tonight :-)

unforgiven3
+5  A: 

Skynet comes to mind!

Shahin
You're only seeing the bad side of things. Imagine what those robots could do for us ...
JaredPar
It's science fact that they will become self aware and launch an attack on the human race. SCIENCE FACT!
Shahin
+1 for the science fact comment. I've been chuckling about that for the last 5 minutes. It's my goal to use that in a conversation tomorrow.
JaredPar
+1 For making that your goal ;-)
Jasper Bekkers
+2  A: 

The usual candidate is P = NP, because of the risk a constructive proof would pose to existing encryption schemes.
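As a rough, hypothetical illustration of the stakes (toy numbers only, nothing like a real key size): breaking an RSA-style key is exactly the problem of factoring the public modulus, so any fast factoring algorithm hands an attacker the private exponent.

    # Rough, hypothetical illustration with toy numbers: an RSA-style public key
    # is (n, e); its security rests entirely on factoring n = p*q being hard.
    # If factoring became easy, the private exponent d would fall out immediately.

    def trial_factor(n):
        """Naive factoring; fast here only because the toy modulus is tiny."""
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 2
        raise ValueError("no small odd factor found")

    n, e = 3233, 17               # toy public key (really p = 61, q = 53)

    p, q = trial_factor(n)        # the "attack": factor the public modulus
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)           # modular inverse; needs Python 3.8+

    message = 65
    cipher = pow(message, e, n)   # anyone can encrypt with the public key
    print(pow(cipher, d, n))      # the attacker decrypts: prints 65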

Uri
Though with a constructive proof of P = NP, quite a lot of other tasks would come within reach of automation. Existing encryption schemes will eventually fall for other reasons anyway (what's computationally infeasible today may not be infeasible tomorrow, due to the sheer speed-up of computation).
Vatine
+14  A: 

It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter. (Nathaniel S Borenstein)

Jason Baker
Thank you for saving me the trouble of looking this quote up.
Dave Sherohman
Unless Baghdad is the actual problem domain ;-)
Jasper Bekkers
+16  A: 

Are there things you as a computer scientist (or a person) shouldn't do? Yes. Are there open problems in computer science that shouldn't be solved? No. Even if something can be used for evil, it doesn't follow that it shouldn't be studied. Insofar as CS is basically math, its problems (in the math sense) are relatively amoral. That doesn't mean that the uses to which it is put are. That's where the ethics come in.

tvanfosson
+2  A: 

Designing an email messaging algorithm that can't be detected by a spam filter :)

JaredPar
+1  A: 

I think we all (whether we know it or not) have a vested interest in factoring very large numbers continuing to be hard. But I expect massively parallel molecular computing to solve that whether we like it or not.
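A back-of-the-envelope sketch of why that hardness currently holds on classical hardware (the divisions-per-second rate and the sizes below are assumptions, picked only to show how the cost scales, and why a fundamentally new kind of computation would change the picture):

    # Back-of-the-envelope sketch; the rate and sizes are assumed values,
    # chosen only to show how naive factoring cost scales with key size.
    import math

    RATE = 1e9                          # assume a billion trial divisions per second
    SECONDS_PER_YEAR = 3600 * 24 * 365

    for bits in (32, 64, 256, 2048):
        # Trial division of a number near 2**bits needs about 2**(bits/2) steps.
        log10_years = (bits / 2) * math.log10(2) - math.log10(RATE * SECONDS_PER_YEAR)
        print(f"{bits:4d}-bit modulus: roughly 10^{log10_years:.0f} years of trial division")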

ysth
+4  A: 
Ellery Newcomer
To summarise.. Skynet!
Shahin
+3  A: 

the short answer is: no

the longer answer is:

Oppenheimer and the bomb are often invoked by people who would rather the Allies had lost WWII - so I don't put much stock in their opinions

the progress of science is inevitable; things cannot be un-invented

blaming the tool is what children do; adults take responsibility for their own actions

[and drive-by downvoting an answer you don't like is spiteful and cowardly; a downvote is supposed to mean "not helpful", not "I don't like this answer/person". If you disagree, say so and say why, one or both of us might learn something]

Steven A. Lowe
I didn't downvote but I disagree with your statement regarding the Bomb. People who would rather the Allies had lost WWII are morons. But a lot of reasonable people see the moral implications of building the bomb (Feynman among them!) and reject it. I don't know where I stand but I see the dilemma.
Konrad Rudolph
Good answer, Steven, except for 3rd sentence. You can do better. BTW, I hope your book is coming along.
Mike Dunlavey
@[Konrad Rudolph]: @[Mike Dunlavey]: thanks for your comments and kind words. There is no moral issue in building the A-bomb, especially when you have reason to believe your enemies are building one too. The moral decision is whether, when, and how to use it or not.
Steven A. Lowe
@[Konrad Rudolph]: @[Mike Dunlavey]: I would also add that similar arguments can be made for constructing any weapon of any kind - or a government tracking database, or a database engine used to build a government tracking database, etc. Morality is in the hands of the user, not the tool or inventor
Steven A. Lowe
@[Konrad Rudolph]: @[Mike Dunlavey]: although i would of course draw the line at building an obvious Doomsday Device like the Large Hadron Collider ;-)
Steven A. Lowe
It's not that there's no moral issue, it's that the moral issue is not simple. IMO it's part of the tragedy of human nature that war degrades all sides and reduces human beings to a simple, brutal, heartless calculus.
Mike Dunlavey
@[Mike Dunlavey]: when survival is at stake, the essentials of nature move to the forefront. Civilization is something that happens during peacetime.
Steven A. Lowe
@[Mike Dunlavey]: the point of science is to learn, to advance human knowledge. Some of the things we learn are horrifying, and some of the tools we build can be misused - but that applies to hammers as well as physics. Your trigger finger makes the moral decision, not the gunsmith.
Steven A. Lowe
Right. It's part of our species' stupidity that we forget how awful war is until we blunder into it yet again. Maybe you have to be my age to appreciate that. I see kids happily playing shoot-up games. I knew a sailor who witnessed a real H-bomb test. Not easy to talk about.
Mike Dunlavey
A lot of what we talk about on SO is which coding practices are more important: personal freedom vs. what is good for everybody. We want the freedom to make moral decisions, but we don't want those other moral infants to have it.
Mike Dunlavey
@[Mike Dunlavey]: very true. But sometimes the only thing worse than fighting a war is not fighting a war; when all other options are exhausted, survival trumps everything else.
Steven A. Lowe
A: 

If it is proven that P=NP, would we all lose our jobs or make more?

  • More things solvable, even on iPhones
  • One algorithm could solve anything (once reduced; see the sketch below)

Won't happen ever though (on silicon binary computers), so can put that one to bed.
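To illustrate the "once reduced" bullet above, here is a rough sketch (entirely illustrative: the encoding, the tiny graph, and the brute-force stand-in for the hypothetical polynomial-time SAT solver are all assumptions) of how one solver can answer a different-looking problem, graph 3-colouring, via a reduction to SAT:

    from itertools import product

    def brute_force_sat(num_vars, clauses):
        """Stand-in for the hypothetical fast SAT solver a constructive
        P = NP proof would provide; here it simply tries every assignment."""
        for bits in product([False, True], repeat=num_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return bits
        return None

    def coloring_to_sat(num_vertices, edges, colors=3):
        """The reduction: graph colouring rewritten as CNF clauses.
        The variable for (vertex v, colour c) gets index v*colors + c + 1."""
        var = lambda v, c: v * colors + c + 1
        clauses = []
        for v in range(num_vertices):
            clauses.append([var(v, c) for c in range(colors)])        # some colour
            for c1 in range(colors):
                for c2 in range(c1 + 1, colors):
                    clauses.append([-var(v, c1), -var(v, c2)])        # at most one
        for u, w in edges:
            for c in range(colors):
                clauses.append([-var(u, c), -var(w, c)])              # endpoints differ
        return num_vertices * colors, clauses

    # A 4-cycle is 3-colourable; the reduction plus the "solver" find a colouring.
    num_vars, clauses = coloring_to_sat(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
    model = brute_force_sat(num_vars, clauses)
    print([c for v in range(4) for c in range(3) if model[v * 3 + c]])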

Overflown
Actually, it's the halting problem that would allow one algorithm to solve anything, and that's (fortunately?) impossible to solve.
David Thornley
In my view, programming is ideally about stating problems, and only incidentally about solving them. So not to worry.
Mike Dunlavey
"halting = everything ≠ SAT" -- nearly all practical (yet formalized) problems are in NP.
Jonas Kölker
@David Thornley: solving the halting problem would only tell you whether an algorithm halts in finite time. It says nothing about whether you have the correct algorithm, or whether it will finish solving your problem before the universe ends.
Unknown
Why has this been downvoted three times? The only thing that is slightly off is the second bullet.
@ Jonas Can you expand on your comment? Sorting is a practical problem that is not in NP. So is search. So is collision detection.
+5  A: 

Quantum computing?

"This universe has performed an illegal operation and will now implode"

Uri
+10  A: 

Strong AI.

Whilst I believe that Strong AI is laughable, there are those who are pouring money and resources into trying to create intelligent computers on a par with human beings, with the ultimate goal of essentially creating an artificial life-form with perceived consciousness.

This in itself opens an endless bucket of ethical issues. If humans are capable of creating 'life-forms' exceeding our own abilities, will we still value our own? What steps come after creating beings that are far superior to ourselves? Will we eventually rebuild ourselves and speed up evolution? Will we use this power to exceed the human bounds of knowledge and try to answer life's greatest questions, things we would have to adapt ourselves into understanding?

It's all a bit crazy, but proving the existence of Strong AI would vastly stretch the bounds of our capabilities as human beings. We could create a utopia, but basic human nature dictates that the power would create unimaginable destruction.

EnderMB
You hit the question on the noggin. I think it's not as important to answer the question as to see if this thing we call "human nature" can be improved faster than the technology. When people on SO talk gleefully about combat games, I'm discouraged.
Mike Dunlavey
Strong AI at least as strong as a human is almost a certainty. Many neuron connections that have been mapped out have an electric-circuit version. Whether or not this can perpetually create smarter AI without chance or combinatorial trial and error (evolution) is an entirely different question.
Unknown
+2  A: 

I think there are some data-mining projects that you probably shouldn't work on.

Anything which is really going to extinguish humanity's last shreds of privacy.

interstar
+2  A: 

What makes you think that anybody can declare a research area closed? If it's got potential, somebody's going to be working on it. All that well-intentioned people can accomplish by refusing to work on it is to ensure that, when a new technology is developed, it will be developed by ill-intentioned people.

Consider the atomic bomb. The Manhattan Project was not the only such program. Germany had one (which, we found out postwar, went wildly astray), and the Japanese had two (one for the Army, one for the Navy - they weren't big on interservice cooperation). The drive to make the A-bomb was based on the belief that Nazi Germany couldn't be allowed to get one first, and that fear was reasonably well-founded at the time.

David Thornley
A: 

Well, I can think of some purely malicious programs. Say "a virus that is able to spread to every connected device and is impossible to eradicate without a complete memory wipe of all possible memory devices in the computer". Something like the Ultimate Virus. I don't know if it's possible, but I'm sure that nobody should attempt it. ;)

Vilx-
+2  A: 

I need to emphasize the “no” answer once more.

Any other answer displays a deep misunderstanding of science. There must be no forbidden questions, because forbidding them would break the whole system. The whole notion that there are questions that should not be explored is inherently anti-scientific.

On the other hand, I don’t think (as tvanfosson does) that CS is necessarily amoral. Questions of strong encryption in particular raise a whole host of moral issues that need to be addressed by software architects (believe me – it’s better that way! At the moment, politicians all over the world try to address issues they don’t understand, with catastrophic and often ridiculous results).

Is this a problem? Well, it might be one since there are dangerous answers. But I still believe that the danger posed by these answers cannot be countered by ignoring the question. Rather, we need to explore even further. Nothing, nothing is more dangerous than lack of knowledge (again, I refer to the abovementioned politicians as just one example).

Now, this has been rather general but yes, it also applies to computer science. In particular, answering the question of whether P = NP isn’t dangerous at all. What may become dangerous is if the answer unexpectedly turned out to be “yes.” In that case, we would need to rebuild much of today’s IT infrastructure from scratch. But on the other hand, we would unlock untapped problem-solving potential.

Konrad Rudolph
+2  A: 

If the question is about CS, I'm not so much worried about programs that might get loose in the world's computers, at least in the short term.

With my AI background, I'm used to thinking of people's heads as computers. The programs that get loose in those are really scary. Examples are fundamentalism of all kinds.

Mike Dunlavey