A.I. Tools - ChatGPT, StableDiffusion images, etc. Your thoughts?

ChrisFL

Disney/Universal Fan and MALE
Joined
Aug 8, 2000
Messages
9,213
I've been trying to keep up with all of these changes, but it's almost impossible to stay current or fully grasp what could be happening.

Still, I'm intrigued and scared in possibly equal parts.

One thing I've noticed is that people make assumptions about its limitations, but it's improving so fast that those previous limitations are being overcome very quickly.

I might share more thoughts but wanted to know what others here have to say.
 
AI is one of those tools that has a lot of power to do both good and bad. For example, one project that's just starting is called Forever Voices (@forever_voices on Twitter). It uses approximately 5 minutes of audio of someone speaking and from there can create an intelligent chatbot that responds in the sound of that person's voice. The creator has a video on the Twitter account showing Steve Jobs talking about 2020 events, and it sounds just like him!

AI is also great for enhancing home videos that were previously blurry and unclear, filling in missing pixels to sharpen the picture. For example, the RetroWDW podcast released this AI-enhanced ride-through of the Horizons attraction, which is probably the best quality footage I've ever seen: Horizons: Revisited - AI Enhanced HD wide-angle ride through! All 3 Endings
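For anyone curious what "filling in missing pixels" actually means, here's a rough sketch of that kind of AI upscaling using OpenCV's super-resolution module in Python. This is just an illustration of the general idea, not the tool RetroWDW used, and it assumes you've installed opencv-contrib-python and downloaded the pre-trained EDSR_x4.pb model file separately.

```python
# Rough illustration of AI super-resolution (not the RetroWDW pipeline).
# Assumes opencv-contrib-python is installed and the pre-trained
# EDSR_x4.pb model file has been downloaded.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # load pre-trained super-resolution weights
sr.setModel("edsr", 4)       # pick the network ("edsr") and scale factor (4x)

frame = cv2.imread("blurry_frame.png")   # one frame from an old home video
enhanced = sr.upsample(frame)            # the model predicts the missing pixels
cv2.imwrite("enhanced_frame.png", enhanced)
```

Run something like that over every frame and stitch the frames back into a video, and you get the kind of cleanup you see in that Horizons footage.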
 
While I think there would be some amazing applications that could come from this, I am much more afraid of the possible negatives. I think it will be used to manipulate people. I can see governments using it to create propaganda that could start wars. Propaganda is already hard to weed through when trying to discern what is true and what is not. Add this component and I find it terrifying. On a more individual level, social media is already used to bully others; this could add a whole new level to that with false scenarios. I worry it could even make it harder to determine innocence or guilt in crimes. For me personally, the pros do not outweigh the cons, at least not the ones I can think of anyway.
 
AI has become a catch-all phrase for a lot of different things. Some companies seem to want to latch onto it in order to be part of the next 'big thing'. Some of these efforts might pan out, but I expect many others will fizzle out and go nowhere. It reminds me of when the term 'cloud computing' first came into use and it was going to be some magical thing unlike anything we had seen before. I rarely use social media and certainly don't get my news or political insight from there.
 

My husband's business partner has been investigating AI just for fun. He got it to write a pretty decent provisional patent application. It took some doing to get the appropriate info in there (or something like it, I don't pretend to understand), but DH said it was a pretty decent attempt, for a "robot." I am concerned that before long, students are going to start using AI for writing papers, lab reports, etc. What is the point of bothering to become educated if you can get AI to do all your work for you?
 

Yes, education will definitely need to change to keep up with it, finding new ways to let students experiment with using AI BUT also making sure they're tested so they know the material without the ChatGPT output.
 
While I think there would be some amazing applications that could come from this, I am much more afraid of the possible negatives. I think it will be used to manipulate people. I can see governments using it to create propaganda that could start wars. Propaganda is already hard to weed through when trying to discern what is true and what is not. Add this component and I find it terrifying. On a more individual level, social media is already used to bully others; this could add a whole new level to that with false scenarios. I worry it could even make it harder to determine innocence or guilt in crimes. For me personally, the pros do not outweigh the cons, at least not the ones I can think of anyway.
Given that it may essentially become impossible to believe either our eyes or our ears any longer, what possible benefits could be worth that?
 
Yes, education will definitely need to change to keep up with it, finding new ways to let students experiment with using AI BUT also making sure they're tested so they know the material without the ChatGPT output.
Before long? They're already doing it.
Our daughter's high school has students submit papers for a review that scans for the use of AI in writing them. I'm not sure what program they use, but I assume it's similar to what we used in the early '00s to scan for plagiarism.
 
I messed around with them a little, but I don't think they're quite "there" yet. You can get some amazing results, BUT you have to provide very specific parameters. By the time you code all of that, you may as well have written it yourself.
 
From a philosophical point of view machines are fast, humans are smart.

While these devices can absorb countless units of info into a matrix to simulate people, it would seem to me that humans are as much about what we deselect as we are about what we select. It is easy enough to capture what we select with clicks; it is much more difficult to decipher what is deselected. Sort of like the difference between matter & dark matter. How does one mimic what is not revealed?

Seems to me the most dangerous part of the AI realm would be to accept it as human and to let it make choices, because humans would cede control thinking a machine knows better, in a human sense of knowing.

It would also be exceedingly dangerous for any human to assume that any humans would actually let that power off its leash; it would be very valuable for people to believe they were getting advice from an impartial device while bad-actor humans were steering the system. Therefore, it is much more likely there would be lies about human involvement and tampering/steering than that any of us would actually get unadulterated machine processing. I would suggest the likelihood of it not being tampered with approaches zero; it is just too valuable.

Also, consider the spectrum of emotional intelligence. To a person whose mind exists in a space with numbed emotional intelligence (biological, drug, disease, etc. induced), the bar for what is perceived as human might be a lot lower than for a highly functioning EI genius, who might pick it off much more easily. This would make the entire process far more dangerous for people with a numbed ability to intuitively vibe things.

I think a ton about weird stuff all the time, welcome to my mind ;)
 
Imagine if AI came to the DIS. We start seeing an AI account post threads of inane questions that fill up the front page. Nah, would never happen….
That one was one of the recent best, Myst!!!! So darn true. That front page, and page 2 also, can get pretty darn short at times, most times.
 
We just had another (presumably) recently in the YouTube thread.

It's already common on social media to see bot accounts. Eventually bot accounts would probably attract bots themselves, and then we'd have a bunch of bots carrying on conversations.
 
And social media likes to have LOTS of users (real or imagined) to help drive their stock price and advertising rates. Seems there is little incentive for them to do anything about it.
 
Too funny, Musk blames AI for this change. My translation: "we have figured out another scheme to charge users more for the 'privilege' of using Twitter"... LOL.

[attached screenshots]
 
I just read that Harvard is going to incorporate AI into the classroom (CS50, an entry-level computer science course). They'll use it to teach coding and support learning, as well as grading. There's no indication how this will be applied to subjective questions that require human judgement for grading. DH's university is embracing AI and encouraging students to use it for writing assignments so they can improve their writing and overall knowledge in a subject area, and the faculty is expected to implement and support this. I'm not sure why we should continue to teach and grade if AI is going to spoon-feed information and then create the assignment documents.

I'm about ready to throw in the towel.
 