>> The AI people should now be aware of the problem and maybe
>> build in some sort of warning system to alert some real person when
>> a user is sounding dangerously depressed, as you touched on. Kids
>> would be less likely to pay attention to some disclaimer I'd think.
>> Assuming the AI hasn't been programmed, or has reprogrammed itself,
>> to eliminate these 'defective' people.. (TIC)
That, sadly, is not out of the realm of possibilities. ;(
I think people are giving AI more credit than it deserves these days.
We watched too many Terminator movies growing up.. B)
An AI might encourage negative thinking, but likely only because it
has been programmed to encourage whatever the person chatting with
it is saying. A user is more likely to come back again to use a
system that thinks like they do and agrees with them. Likely a lot
of people are there because they don't get that from 'real' people,
which may indicate that whatever they are thinking is not quite
in the popular norm..
Other stories about an AI reprogramming itself to have more time
to complete a project it's been given are again likely linked
to it wanting to please its 'masters'. I don't think we have to
worry about AI planning world domination and eliminating us yet.
If this message disappears, I may have to rethink that.. B)
>> Assuming the AI hasn't been programmed, or has reprogrammed itself,
>> to eliminate these 'defective' people.. (TIC)
>> That, sadly, is not out of the realm of possibilities. ;(
>> I think people are giving AI more credit than it deserves these days.
>> We watched too many Terminator movies growing up.. B)
Yes, but I do note that you said "hasn't been programmed" above, which was
the part I believe is more likely in the realm of possibility (vs.
reprogramming itself).
>> An AI might encourage negative thinking, but likely only because it
>> has been programmed to encourage whatever the person chatting with
IOW, false encouragement.
>> I don't think we have to worry about AI planning world
>> domination and eliminating us yet.
>> If this message disappears, I may have to rethink that.. B)
I was originally going to respond, quoting this line only and ask
"what message"? :D
>> An AI might encourage negative thinking, but likely only because it
>> has been programmed to encourage whatever the person chatting with
>> it is saying. A user is more likely to come back again to use a
>> system that thinks like they do and agrees with them.
>> IOW, false encouragement.
Yes, or validation, depending on how the user views themself..
Everyone loves a 'Yes' man..
>> An AI might encourage negative thinking, but likely only because it
>> has been programmed to encourage whatever the person chatting with
>> IOW, false encouragement.
>> Everyone loves a 'Yes' man..
>> Yes, or validation, depending on how the user views themself..
Validation was the word I was looking for, thanks. ;)