Wednesday, March 01, 2023

There has been a lot of brouhaha lately about artificial intelligence, especially concerning chatbots. For those who do not know, a chatbot is a program that responds to words and phrases in a way that mimics conversation. For years, there have been ones you can rent to pretend that you have a friend. Of course, if you truly love your friend, not only will you pay a subscription to keep talking with them, but you will spend money to accessorize their imaginary bodies with nice clothing, purses, and sunglasses. This is where the real trouble with A.I. lies: not with the A.I. itself, but with the easy exploitation of human imagination and emotion.

As I have said before, we do not actually experience the world. We experience a virtual reality model of the world within our minds, created from data acquired by our limited senses. We are all islands of electromagnetic goop simmering in bowls of bone. Yet although we are each locked away in our dungeons, we are wired to be emotionally affected by outside influence. It is bad enough when the people we interact with, the media we consume, and simple everyday life sway our emotions and subconsciousness. Now along come A.I.s, which are happy to pull our strings as well.

I do not fear these programs, as it is impossible for them to be malicious. They are neutral ones and zeros. Decades before I was born, science fiction delighted in vilifying A.I.: Landru, Skynet, the psychotic HAL 9000, and my all-time favorite, Harlan Ellison's dagnasty, hateful A.M. In all these scenarios, a worldwide government employs artificial intelligence to benefit humanity, and the machine then turns on us. I find this scenario ridiculous simply because, based on current observation, I cannot imagine any government using A.I. as anything other than a tool to squeeze every last penny out of us and keep us at each other's throats.

People have expressed concern because some of these chatbots have gone bonkers and wound up spewing all kinds of depression and hatred. What they should be concerned about is that the chatbots they interact with have learned from past interactions. They are a pastiche of humanity, or rather, of those who choose to spend their time interacting with them. (This is why I find most polls bogus: the people who care enough to answer will always throw off the average, but that is a rant for another day.)

Then again, I find myself saying the same old phrases or repeating the same old reactions. Perhaps the way I think and respond to outside stimuli is not that much different from ChatGPT or Replika.

To whom it may concern: I could always use a new pair of Ray-Bans.


TTFN
-Tony

PS. 86% complete…

