This graphic summarizes shifts in public attitudes about AI, according to the Stony Brook-led survey. Image by Jason Jones

A Stony Brook University study suggests that on average, U.S. adults have gained confidence in the capabilities of AI and grown increasingly opposed to extending human rights to advanced AI systems.

In 2021, two Stony Brook University researchers – Jason Jones, PhD, Associate Professor in the Department of Sociology, and Steven Skiena, PhD, Distinguished Teaching Professor in the Department of Computer Science – began conducting a survey study on attitudes toward artificial intelligence (AI) among American adults. Some of their recent findings, published in the journal Seeds of Science, show a shift in Americans’ views on AI.

The researchers compared data collected from random, representative samples in 2021 and 2023 to determine whether public attitudes toward AI have changed amid recent technological developments – most notably the launch of OpenAI’s ChatGPT chatbot in late 2022. The new work builds on previous research into how AI is perceived in society, by way of the Jones-Skiena Public Opinion of Artificial Intelligence Dashboard and similar survey studies conducted with varying demographics.

The new study sampled two unique groups of nearly 500 Americans ages 18 and above, one surveyed in March 2021 and the other in April 2023. Participants shared their opinions on whether it is achievable to build a computer system able to perform any intellectual task a human can, whether such a system should be built at all, and whether that system – referred to as Artificial General Intelligence (AGI) – should be afforded the same rights as a human being.

The researchers originally used Google Surveys as the platform for this research because it could deliver random, representative samples.

“What we truly wanted to know was the distribution and average of public opinion in the U.S. population,” says Jones, co-author and also a member of Stony Brook’s Institute for Advanced Computational Science (IACS). “A random, representative sample is the gold standard for estimating that in survey research. Google shut down their Google Surveys product in late 2022, so we used another platform called Prolific to do the same thing for the second sample.”

Once the samples were collated, comparing them revealed a statistically significant change in opinion on whether an AGI system is possible to build and whether it should have the same rights as a human.

In 2023, American adults more strongly believed in the achievability of AGI, yet were more adamantly against affording such systems the same rights as human beings. There was no statistically significant change in public opinion on whether AGI should be built, which was weakly favored across both samples.

Jones and Skiena stress that more studies must be conducted to better understand public perceptions of artificial intelligence as the technology continues to grow in societal relevance.

They will repeat the survey this spring with the same methods used in 2023, with the hope of building further on their findings.

Photo by David Ackerman

This week, TBR News Media has embarked upon a pilot project we’re calling News Flash.

It’s a first-of-its-kind journalistic endeavor to integrate artificial intelligence technologies into our newsroom operation. Using ChatGPT, a popular chatbot developed by OpenAI that launched in November 2022, we believe News Flash can aid us in our mission to inform our North Shore readership.

The concept here is simple. We are feeding some of our original content into ChatGPT, directing the chatbot to extract the most interesting or insightful news nuggets within a given article.
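The workflow described above can be sketched in miniature. The prompt wording below is hypothetical — the newsroom has not published its actual instructions to the chatbot — but it illustrates the basic shape: original reporting in, a request for the most newsworthy bullet points out.

```python
def build_newsflash_prompt(article_text: str, n_points: int = 3) -> str:
    """Assemble a hypothetical summarization prompt for a chatbot.

    The exact wording is an illustration only; the key idea is that the
    article itself is supplied as input, and the chatbot is directed to
    extract a fixed number of newsworthy points.
    """
    return (
        f"From the article below, extract the {n_points} most interesting "
        "or insightful news points as short bullets.\n\n" + article_text
    )

prompt = build_newsflash_prompt("Town board approves new park budget ...")
print(prompt)
```

Whatever the chatbot returns from such a prompt would then pass through human editors before publication, as the pledge below describes.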

While AI generates the bullet points, we assure our readers that our staff retains complete editorial control over the end product. We are committed to subjecting AI-produced content to the same rigorous standards we use for content by human writers. 

There are several motivations behind this effort. We are acutely aware, and deeply concerned, that our digital technologies have diminished our attention spans and impaired our faculties for processing large chunks of information. Reading proficiency scores in the U.S. are declining, and in an electoral system demanding a well-informed citizenry, this spells deep trouble for our republic.

Presenting noteworthy or insightful points up front may make one more inclined to read the entire article. But even if a reader opts not to read the article, News Flash will have delivered some of the necessary material, informing even the nonreader.

There is also a broader philosophical objective behind this project. Artificial intelligence may be the defining technological innovation of our lifetimes. Our staff is in uncharted waters, with no precedents to guide us on properly synchronizing AI and local journalism.

With the awesome power of AI comes an equally awesome responsibility to harness its power appropriately. We believe trained journalists must guide AI, using this tool to enhance and augment the reader experience. Without strict human oversight, we risk irreversible disruption to a vital American institution, with the potential ramifications still unknown.

Scanning the local media landscape, we see alarming trends all around us. Each year, more local news outlets shutter. Others consolidate under large conglomerates. And most disturbingly, more and more Americans live in news deserts, or places without a local newspaper. These are trying times that should trouble journalists and citizens alike.

Without the local press, we naturally gravitate to larger, national media outlets whose contents are increasingly polarized and politically charged. Reading only about higher levels of government, whose centers of power are far away from Long Island and interests often unaligned with our own, we become disillusioned and disconnected from the democratic process.

For the first time ever, local journalists have a powerful tool to help advance their mission to inform democracy. If used properly, AI can help counteract these downward trajectories in our industry, restoring local journalism to its central place in American life.

At TBR News Media, we pledge to use AI technology responsibly. Like generations of pioneers before us, let us plunge forth into the Great Unknown. May this adventure prove fulfilling for both local journalism and democracy — and our readers.

Pixabay photo

By Leah S. Dunaief

Leah Dunaief

You’ve heard of ChatGPT, yes? So had a lawyer in Brooklyn, who learned of it from his college-aged children. Though he has been in practice for 30 years, he had no prior experience with the OpenAI chatbot. But when he was retained for a lawsuit against the airline Avianca and went into Federal District Court with a legal brief filled with judicial opinions and citations, poor guy, he made history.

All the evidence he was bringing to the case was generated by ChatGPT. All of it was false: creative writing generated by the bot.

Here is the story, as told in The New York Times Business Section on June 9. A passenger, who had sued the airline for injury to his knee by a metal serving cart as it was rolled down the aisle in 2019 on a flight from El Salvador to New York, was advised that the lawsuit should be dismissed because the statute of limitations had expired. His lawyer, however, responded with the infamous 10-page brief offering more than half a dozen court decisions supporting their argument that the case should be allowed to proceed. There was only one problem: None of the cases cited in the brief could be found.

The decisions, although they named previous lawsuits against Delta Airlines, Korean Airlines and China Southern Airlines, and offered realistic names of supposedly injured passengers, were not real.

“I heard about this new site, which I falsely assumed was, like, a super search engine,” lamely offered the embarrassed attorney.

“Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet,” explained The NYT.
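The Times’ description, that such models pick which text fragments statistically follow which, can be illustrated with a toy sketch. The miniature below is nothing like ChatGPT in scale or sophistication (a bigram table over a ten-word corpus, versus billions of examples), but it shows the principle: the output is constrained only to be statistically plausible, not to be true, which is exactly how realistic-sounding but fictitious citations can arise.

```python
import random
from collections import defaultdict

# A tiny corpus standing in for "billions of examples pulled from the internet".
corpus = "the court ruled the case moot the court dismissed the case".split()

# Record which word follows which: a statistical model in miniature.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Chain statistically plausible next words, with no notion of truth."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # fluent-sounding, but accuracy is never checked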

Now the lawyer stands in peril of being sanctioned by the court. He declared that he had asked questions of the bot and had gotten genuine case citations in response, which he included in his brief. He also printed out and included his dialogue with ChatGPT, which ultimately offered him the words, “I hope that helps.”

But the lawyer had done nothing further to ensure that those cases existed. They seemed professional enough to fool the professional.

Now the tech world, lawyers and judges are fixated on this threat to their profession. And warnings abound that erroneous generative AI could carry that threat over to all of humanity.

But this is not an entirely ominous story.

Researchers at Open AI and the University of Pennsylvania have concluded that 80% of the U.S. workforce could see an effect on at least 10% of their tasks, according to The NYT. That means that some 300 million full-time jobs could be affected by AI. But is that all bad? Could AI become a helpful tool?

By using AI as an assistant, humans can focus on the judgment side of data-driven decision-making: checking and interpreting the information the bot provides.

Ironically, the lawyer’s children probably passed their ChatGPT-fueled courses with good grades. Part of that is the way we teach students, offering them tons of details to memorize and regurgitate on tests or in term papers. The lawyer should have judged his ChatGPT-supplied data. Future lawyers now know they must. 

As for education, emphasis should go beyond “what” and even “so what” to “what’s next.” Once students have the facts and history, learning should be about how to think, how to analyze, how to interpret and how to take the next steps. Can chatbots do that? Perhaps in an elementary way they now can. Someday they will in a larger context. And that poses a threat to the survival of humanity, because machines will no longer need us.

METRO image

By Daniel Dunaief

Daniel Dunaief

I’m really writing this. Or am I?

Now that I’ve seen artificial intelligence in action, I know that the system, such as it is, can write impressive pieces in much shorter time than it takes me to write a column or even this sentence.

And yet, I don’t want a machine to write for me or to reach out to you. I prefer the letter-by-letter, word-by-word approach I take, and would like to think I earn the smile, frown or anything in between that I put on your face as a result of the thinking and living I’ve done.

However, I do see opportunities for AI to become the equivalent of a personal assistant, taking care of needed conveniences and reducing inconveniences. For conveniences, how about if AI did the following:

Grocery shopping: I’m sure I get similar foods each week. Maybe my AI system could not only buy the necessary and desired food items, but perhaps it could reduce the ones that are unhealthy or offer new recipes that satisfy my food preferences.

Dishes: I’m not looking for a robot akin to “The Jetsons,” but would love to have a system that removed the dirt and food from my dishes, put them in the dishwasher, washed them and then put them away. An enhanced system also might notice when a dish wasn’t clean and would give that dish another wash.

Laundry: Okay, I’ll admit it. I enjoy folding warm laundry, particularly in the winter, when my cold hands are starting to crack from being dry. Still, it would save time and energy to have a laundry system that washed my clothes, folded them and put them away, preferably so that I could see and access my preferred clothing.

Pharmacy: I know this is kind of dangerous when it comes to prescriptions, but it’d be helpful to have a system that replenished basic, over-the-counter supplies, such as band-aids. Perhaps it could also pick out new birthday and greeting cards that expressed particular sentiments in funny yet tasteful ways for friends and family who are celebrating milestone birthdays or are living through other joyful or challenging times.

For the inconveniences, an AI system would help by:

Staying on hold: At some point, we’ve all waited endlessly on hold for some company to pick up the phone to speak to us about changing our flights, scheduling a special dinner reservation or speaking with someone about the unusual noise our car makes. Those “on hold” calls, with their incessant chatter or their nonstop hold music, can be exasperating. An AI system that waited patiently, without complaint or frustration and that handed me the phone the moment a person picked up the call, would be a huge plus.

Optimize necessary updates: Car inspections, annual physicals, oil changes, and trips to the vet can and do go on a calendar. Still, it’d be helpful to have an AI system that recognizes these regular needs and coordinates an optimal time (given my schedule and the time it’ll take to travel to and from these events) to ensure I don’t miss an appointment and to minimize the effort necessary.

Send reminders to our children: Life is full of balances, right? Too much or too little of something is unhealthy. These days, we sometimes have to write or text our kids several times before we get to speak with them live. An AI system might send them a casual, but loving, reminder that their not-so-casual but loving parents would like to speak with them live.

Provide a test audience: In our heads, we have the impulse to share something funny, daring or challenging, like, “hey, did you get dressed in the dark” or “wow, it must be laundry day.” Sure, that might be funny, but an AI system designed to appreciate humor in the moment — and to have an awareness of our audience — might protect us from ourselves. Funny can be good and endearing, but can also annoy.