Wesley: Illyria can be...difficult. Testing her might be hard without getting someone seriously hurt. Angel: We'll make Spike do it. Wesley: Good.

'Underneath'


Natter 41: Why Do I Click on ita's Links?!  

Off-topic discussion. Wanna talk about corsets, duct tape, or physics? This is the place. Detailed discussion of any current-season TV must be whitefonted.


Gudanov - Jan 04, 2006 12:05:50 pm PST #7482 of 10002
Coding and Sleeping

So we never learned if the extermination of humans was a bug or a feature?


Jessica - Jan 04, 2006 12:07:24 pm PST #7483 of 10002
And then Ortus came and said "It's Ortin' time" and they all Orted off into the sunset

So we never learned if the extermination of humans was a bug or a feature?

Depends on who the target user is, surely?


Matt the Bruins fan - Jan 04, 2006 12:11:36 pm PST #7484 of 10002
"I remember when they eventually introduced that drug kingpin who murdered people and smuggled drugs inside snakes and I was like 'Finally. A normal person.'” —RahvinDragand

Man, I wish I'd known about Roombas early last year when I was spending like wildfire on the new place. I might actually have presentable floors today.


Trudy Booth - Jan 04, 2006 12:14:35 pm PST #7485 of 10002
Greece's financial crisis threatens to take down all of Western civilization - a civilization they themselves founded. A rather tragic irony - which is something they also invented. - Jon Stewart

It's wrong for me to want a Scooba in an apartment as small as mine. I realize this.

But the white linoleum is a PITA to keep clean. If I had a little robot going every day...


Jessica - Jan 04, 2006 12:16:50 pm PST #7486 of 10002
And then Ortus came and said "It's Ortin' time" and they all Orted off into the sunset

How To Get A Human Being On The Phone

A cheat sheet for navigating many common automated phone menus (banks, stores, utilities, etc.).

Handy and on-topic!


shrift - Jan 04, 2006 12:18:45 pm PST #7487 of 10002
"You can't put a price on the joy of not giving a shit." -Zenkitty

Fay's reaction to the news about Sharon reminded me that the new Star Wars novel is titled Star Wars: Dark Lord.

DARTH VOLDEMORT.


tommyrot - Jan 04, 2006 12:21:48 pm PST #7488 of 10002
Sir, it's not an offence to let your cat eat your bacon. Okay? And we don't arrest cats, I'm very sorry.

Personally (and no offence) I think our current best chances for a rogue AI will either come out of Google projects mating in the dark, or all the Linux distros banding together.

I don't think a benign AI program will ever go rogue. If we get rogue AI, it'll be the result of an espionage program going out of control, or some sort of cyber-warfare system running amok during a cyber-war.

('cyber-war' is so dated. What's the current term for computer warfare (hacking into networks as a military operation)?)


§ ita § - Jan 04, 2006 12:24:07 pm PST #7489 of 10002
Well not canonically, no, but this is transformative fiction.

I don't think a benign AI program will ever go rogue

Well, maybe not rogue in the moustache-twirling sort of way (and are you calling Google and Linux benign? Hmmm) but there's no reason for our priorities to be machine priorities. Worried about pollution? Kill most of the humans.


Jessica - Jan 04, 2006 12:28:01 pm PST #7490 of 10002
And then Ortus came and said "It's Ortin' time" and they all Orted off into the sunset

An AI with independent learning capabilities could easily learn a value system that's not actively hostile to humans, but still problematic. Like, TrafficAI decides one day that red lights don't really do anyone any good, and it's not going to use them anymore. Or GoogleAI and YahooAI get into a long drawn-out argument, and nobody can look anything up online for hours. And suchlike.


tommyrot - Jan 04, 2006 12:32:19 pm PST #7491 of 10002
Sir, it's not an offence to let your cat eat your bacon. Okay? And we don't arrest cats, I'm very sorry.

but there's no reason for our priorities to be machine priorities. Worried about pollution? Kill most of the humans.

OK, I'm venturing further out into conjecture-land, but I think that by the time we get to autonomous commercial AI programs (for example, one that processes insurance claims), these sorts of straightforward AI applications will be well understood, with clearly defined limits. For example, an AI that's allowed to access the internet would be forced to abide by correct internet protocols, and would be prohibited from hacking into other computers. And I think we'd understand such programs well enough to be sure it was impossible for such a thing to even occur to the AI.

OK, maybe that's rather optimistic of me. But like I said before, an AI that's designed to infiltrate, attack, strategize, take over systems, etc. - I'd be afraid of those. Plus they'd tend to be top-secret, with perhaps little oversight....