
AI

Image by Alexandra Koch from Pixabay

By Leah S. Dunaief

Leah Dunaief, Publisher

As more teens learn about artificial intelligence, more are using ChatGPT to do their schoolwork. According to K-12 Dive, an industry newsletter, their number doubled between 2023 and 2024. What has also increased are the ways in which students can cheat on assignments.

Like every new invention, this one comes with pluses and minuses. Used as an aid, ChatGPT can help by providing new ways to view information: it can create a metaphor, write a synopsis or offer a different perspective. It can also complete the homework outright, depriving the student of real understanding, much like copying someone else’s notes, even if he or she gets a good grade.

And with so much pressure for good grades, some students may find it easier to cheat, especially in a way that is harder to detect, than to actually learn the new material. Of course, the people they are really cheating are themselves. AI cheating may offer a pathway to short-term academic success, but misused it undermines intellectual growth and challenges students’ moral and ethical development.

Cheating, of one sort or another, has always existed in academic circles. One way I can recall from my college days was using CliffsNotes to summarize a plot. These were intended to make possible a term paper on Tolstoy’s “War and Peace” or Dickens’ “Bleak House,” for example, without the student having to read the actual thick book. The student may have made it through the class, but at what price?

Other forms of cheating included hiring someone to write that term paper for the student, or even hiring another student to take a final. We all knew in school that cheating, in various ways, existed.

So how can cheating be prevented?

The answer is, it probably can’t. But according to the K-12 Dive Newsletter, it can be minimized by creating “a culture of integrity” that dissuades cheating.

I can tell you how my college did so in the early 1960s. There was an Honor Board made up of students elected to that position for one year. Anyone accused of cheating or any other improper act could be brought before this jury of peers and either found innocent or, if deemed guilty, appropriately sentenced. Trials, which were few, were held in private, as were verdicts. Innocent until proven guilty was the mindset, and integrity was valued.

That said, I am sure people still cheated without getting caught.

As for catching those misusing ChatGPT, teachers are urged by the Newsletter to read assignments and consider them in light of what they know about each student’s abilities. Testing with pencil and paper in class is revealing. AI use for homework won’t help on a class test.

“Noting the absence of expected concepts or references used in class or the presence of concepts and references not taught in class,” is a giveaway, according to K-12 Dive.

The Newsletter further advocates the idea that students will be less likely to cheat if they understand the moral principles at play, as discussed in school.

Let’s applaud ChatGPT for what it can do. It can prove to be a helpful tool if used transparently. Students should be taught how.

Person utilizing coding software on a computer. Pixabay photo

By Aramis Khosronejad

With the rise of artificial intelligence and a seemingly ever-changing technological world, the main question from educators and parents is how the new generation is going to adapt and thrive at the dawn of this new era. Stony Brook University’s new summer camp aims to prepare them.

Located in the university’s Center of Excellence in Wireless and Information Technology (CEWIT), the program is a collaboration between Stony Brook and Sunrise Technology.

The program has yielded extremely impressive results and received international attention, attracting students from as far as Hong Kong. This largely has to do with the outreach program managed by Rong Zhao, the director of Stony Brook’s CEWIT, which strives to engage students from local high schools on Long Island. Zhao said that the “demonstration” of the self-driving car models “is the biggest attraction.”

The camp is important, according to Zhao, because it shows students that advanced technologies such as self-driving cars and AI aren’t something to fear. “[It isn’t] this mystical, futuristic thing … it’s tangible … [the students] think, ‘Wow, I can code this.’ It is, in the end, the future generation that we’re helping.”

By teaching students that advanced technologies such as AI aren’t something far off in the future but are our current reality, the camp aims to prepare new generations to adapt to this inevitable future.

According to a report by MIT and Boston University researchers cited in Forbes, AI will replace as many as 2 million manufacturing workers by 2025. With such rapidly approaching change, preparing the new generation to adapt is paramount. That kind of preparation is exactly what this interactive AI summer camp aims to provide, according to Zhao.

Yu Sun, founder and CEO of Sunrise Technology, explained in an interview with TBR News Media how the camp works. It consists of three main activities: lectures, where students listen to a professor speak on the coding process; computer labs, where they apply what they learned in the lectures; and lastly a project, in which they develop and deploy their own self-driving car models.
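To give a flavor of what coding a self-driving car model can mean at an introductory level, here is a minimal Python sketch of a lane-keeping proportional controller. It is an illustration only, not the camp’s actual curriculum; the gain value and the crude motion model are assumptions.

# Toy lane-keeping controller: steer in proportion to how far the car
# has drifted from the lane center. Not the camp's actual code.

def steering_angle(lane_offset_m: float, gain: float = 0.5) -> float:
    """Return a steering angle (radians) that counters the lane offset."""
    return -gain * lane_offset_m

# Crude simulation: a car starts 1 m off-center and corrects over time.
offset = 1.0
for step in range(5):
    angle = steering_angle(offset)
    offset += angle * 0.5  # assume steering shifts the car proportionally
    print(f"step {step}: offset = {offset:.2f} m, steering = {angle:.2f} rad")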

The program will “give the students an idea of how these self-driving programs work using their own unique design, which also keeps them engaged,” Sun said. She believes that, regardless of whether parents or students are interested in STEM, “AI is such an up-and-coming buzz and parents want students to be exposed to this field.”

“How can we turn this into an educational opportunity which will have a real impact?” Zhao asked. The future is here already, and teaching students how to thrive and adapt to it is essential.

The program spans two sessions: the first ran for two weeks, July 8 to 19, and the second will run for another two weeks, Aug. 5 to 16. Students in 9th through 12th grade are eligible for the summer camp.

There are very few prerequisites, and the second session is still open to any interested high school students.

TBR News Media attended the annual New York Press Association Spring Conference, held in Saratoga Springs April 26 and 27. We were privileged to receive many awards and would not have done so without our readers — so, a sincere thank you.

This yearly conference traditionally serves as a platform for press organizations to meet with and learn from seasoned speakers and professionals, though this year brought a particular focus on a newcomer to the industry — artificial intelligence.

The printing press, invented around 1440, revolutionized how we shared information. Now, AI is poised to do the same for local journalism. While some may fear robots taking over the newsroom, AI offers exciting possibilities for strengthening our community’s connection to local stories.

Imagine a future where AI combs through public records, uncovering hidden trends in traffic data or school board meetings. Journalists, freed from such tedious tasks, can delve deeper into these trends, crafting investigative pieces that drive real change. AI can also translate interviews, allowing us to share the stories of a more diverse community.

Local news thrives on hyper-personalization. AI can analyze reader preferences, tailoring news feeds to your interests. This ensures you see not just the headlines, but the stories that truly matter to you. Imagine getting in-depth reporting on the school board race or up-to-the-minute updates on road closures that may affect your usual commute.

Of course, AI isn’t a magic bullet. Ethical considerations abound. We need to ensure AI doesn’t become an echo chamber, reinforcing existing biases. Journalists will remain the cornerstone, using AI as a tool to amplify their human touch — the ability to ask tough questions, identify nuance and connect with the community on a deeper level.

The future of local news isn’t about robots replacing reporters. It’s about AI empowering journalists to tell richer, more relevant stories that connect with you. It’s about ensuring a future where local news remains a vibrant part of the community, informing, engaging and holding power to account. This is an exciting transformation, and together we can ensure AI strengthens, not diminishes, the essential role local journalism plays in our lives.

And we’ll continue to strive for more NYPA awards next year.

Change is not just a distant possibility; it’s a force shaping the way we live, work and connect with one another today.

From artificial intelligence and machine learning to environmental and clean energy initiatives, the landscape of technology is evolving at an unprecedented pace, presenting us with both challenges and opportunities. 

In recent news we have seen the incorporation of AI in the classroom, the workforce and industry. We have seen technology integrated on a local level, as in the case of the CBORD Patient app for meal ordering at Stony Brook University Hospital. We even see technology connecting us to one another in civic and other community gatherings through platforms such as Zoom, and we have the opportunity to chat in the many community-run online forums accessed via Facebook and other platforms.

We have seen proposals for clean energy initiatives such as the Sunrise Wind project or the governor’s proposal for electric school buses. We have also seen investments and grants given to institutions such as Brookhaven National Lab and Stony Brook University to help further innovation and creation. 

While some may view these changes with apprehension or skepticism, we must recognize that the march of progress is unavoidable. Rather than resisting the tide of innovation, let us embrace it as a means to propel our community forward into a brighter, more prosperous future.

One of the most promising aspects of integrating emerging technologies into our community is the potential to enhance efficiency and effectiveness across various sectors. Whether it’s optimizing transportation systems through the use of predictive analytics or improving access to health care services through telemedicine and patient assistive applications, technology has the power to revolutionize the way we deliver essential services and meet the needs of our residents.

Moreover, the integration of emerging technologies can foster economic growth and innovation, attracting new businesses, entrepreneurs and investment opportunities to our community. 

However, as we embark on this journey of technological integration, it’s essential that we do so with careful consideration for the ethical, social and environmental implications of our actions. 

As we embrace emerging technologies, let us not lose sight of the importance of human connection and community cohesion. While technology has the power to connect us in unprecedented ways, it can never replace the warmth of a face-to-face conversation or the sense of belonging that comes from being part of a close-knit community. 

This graphic summarizes shifts in public attitudes about AI, according to the Stony Brook-led survey. Image by Jason Jones

A Stony Brook University study suggests that on average, U.S. adults have gained confidence in the capabilities of AI and grown increasingly opposed to extending human rights to advanced AI systems.

In 2021, two Stony Brook University researchers – Jason Jones, PhD, Associate Professor in the Department of Sociology, and Steven Skiena, PhD, Distinguished Teaching Professor in the Department of Computer Science – began conducting a survey study on attitudes toward artificial intelligence (AI) among American adults. Some of their recent findings, published in the journal Seeds of Science, show a shift in Americans’ views on AI.

The researchers compared data collected from random, representative samples in 2021 and 2023 to determine whether public attitudes toward AI have changed amid recent technological developments – most notably the launch of OpenAI’s ChatGPT chatbot in late 2022. The new work builds on previous research into how AI is perceived in society, by way of the Jones-Skiena Public Opinion of Artificial Intelligence Dashboard and similar survey studies conducted with varying demographics.

The new study sampled two unique groups of nearly 500 Americans ages 18 and above, one surveyed in March 2021 and the other in April 2023. Participants shared their opinions on the achievability of constructing a computer system able to perform any intellectual task a human is capable of, whether such a system should be built at all, and whether that system – referred to as Artificial General Intelligence (AGI) – should be afforded the same rights as a human being.

Google Surveys was originally used as the platform for this research due to its capability of delivering random, representative samples.

“What we truly wanted to know was the distribution and average of public opinion in the U.S. population,” says Jones, co-author and also a member of Stony Brook’s Institute for Advanced Computational Science (IACS). “A random, representative sample is the gold standard for estimating that in survey research. Google shut down their Google Surveys product in late 2022, so we used another platform called Prolific to do the same thing for the second sample.”

Once the samples were collated, a statistically significant change in opinion was revealed regarding whether an AGI system is possible to build and whether it should have the same rights as a human.

In 2023, American adults more strongly believed in the achievability of AGI, yet were more adamantly against affording such systems the same rights as human beings. There was no statistically significant change in public opinion on whether AGI should be built, which was weakly favored across both samples.
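The study’s published statistics aren’t reproduced here, but as a rough illustration of how such a comparison works, the following Python sketch tests whether agreement shifted between two independent samples using a chi-square test. The counts are invented for the example, not the study’s data.

# Hypothetical illustration of comparing two independent survey samples.
# The counts below are invented, NOT the study's actual data.
from scipy.stats import chi2_contingency

table = [
    [260, 240],  # hypothetical 2021 sample: agree / disagree AGI is achievable
    [310, 190],  # hypothetical 2023 sample
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a statistically significant shift
# in opinion between the two samples.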

Jones and Skiena stress that more studies must be conducted to better understand public perceptions of artificial intelligence as the technology continues to grow in societal relevance.

They will repeat the survey this spring, using the same methods as in 2023, with the hope of building further on their findings.

Photo by David Ackerman

This week, TBR News Media has embarked upon a pilot project we’re calling News Flash.

It’s a first-of-its-kind journalistic endeavor to integrate artificial intelligence technologies into our newsroom operation. Using ChatGPT, a popular chatbot developed by OpenAI that launched in November 2022, we believe News Flash can aid us in our mission to inform our North Shore readership.

The concept here is simple. We are feeding some of our original content into ChatGPT, directing the chatbot to extract the most interesting or insightful news nuggets within a given article.
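TBR has not published its exact prompt or configuration, but as a sketch of how such a pipeline might look, assuming the OpenAI Python client, with an illustrative model name and prompt wording:

# Rough sketch of a News Flash-style pipeline, assuming the OpenAI Python
# client; the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_news_flash(article_text: str) -> str:
    """Ask the chatbot for the most newsworthy points in an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Extract the most interesting or insightful news "
                        "nuggets from this local news article as brief "
                        "bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Editors then review and edit the bullets before anything is published.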

While AI generates the bullet points, we assure our readers that our staff retains complete editorial control over the end product. We are committed to subjecting AI-produced content to the same rigorous standards we use for content by human writers. 

There are several motivations behind this effort. We are acutely aware of, and deeply concerned by, the way digital technologies have diminished our attention spans and impaired our faculties for processing large chunks of information. Reading proficiency scores in the U.S. are declining, and in an electoral system demanding a well-informed citizenry, this spells deep trouble for our republic.

Presenting noteworthy or insightful points up front may make one more inclined to read the entire article. But even if a reader opts not to read the article, News Flash will have delivered some of the necessary material, informing even the nonreader.

There is also a broader philosophical objective behind this project. Artificial intelligence may be the defining technological innovation of our lifetimes. Our staff is in uncharted waters, with no precedents to guide us on properly synchronizing AI and local journalism.

With the awesome power of AI comes an equally awesome responsibility to harness its power appropriately. We believe trained journalists must guide AI, using this tool to enhance and augment the reader experience. Without strict human oversight, we risk irreversible disruption to a vital American institution, with the potential ramifications still unknown.

Scanning the local media landscape, we see alarming trends all around us. Each year, more local news outlets shutter. Others consolidate under large conglomerates. And most disturbingly, more and more Americans live in news deserts, or places without a local newspaper. These are trying times that should trouble journalists and citizens alike.

Without the local press, we naturally gravitate to larger, national media outlets whose content is increasingly polarized and politically charged. Reading only about higher levels of government, whose centers of power are far from Long Island and whose interests are often unaligned with our own, we become disillusioned and disconnected from the democratic process.

For the first time ever, local journalists have a powerful tool to help advance their mission to inform democracy. If used properly, AI can help counteract these downward trajectories in our industry, restoring local journalism to its central place in American life.

At TBR News Media, we pledge to use AI technology responsibly. Like generations of pioneers before us, let us plunge forth into the Great Unknown. May this adventure prove fulfilling for both local journalism and democracy — and our readers.

Pixabay photo

By Leah S. Dunaief

Leah Dunaief

You’ve heard of ChatGPT, yes? So had a lawyer in Brooklyn, who learned of it from his college-aged children. While the lawyer has been in practice for 30 years, he had no prior experience with the OpenAI chatbot. But when he was hired for a lawsuit against the airline Avianca and went into Federal District Court with a legal brief filled with judicial opinions and citations, poor guy, he made history.

All the evidence he was bringing to the case was generated by ChatGPT. All of it was false: creative writing generated by the bot.

Here is the story, as told in The New York Times Business Section on June 9. A passenger, who had sued the airline for injury to his knee by a metal serving cart as it was rolled down the aisle in 2019 on a flight from El Salvador to New York, was advised that the lawsuit should be dismissed because the statute of limitations had expired. His lawyer, however, responded with the infamous 10-page brief offering more than half a dozen court decisions supporting their argument that the case should be allowed to proceed. There was only one problem: None of the cases cited in the brief could be found.

The decisions, although they named previous lawsuits against Delta Airlines, Korean Airlines and China Southern Airlines, and offered realistic names of supposedly injured passengers, were not real.

“I heard about this new site, which I falsely assumed was, like, a super search engine,” lamely offered the embarrassed attorney.

“Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet,” explained The NYT.
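As a toy illustration of that statistical idea, far simpler than any real large language model, the following Python sketch learns which word tends to follow which in some sample text and generates fluent-sounding but truth-blind continuations:

# Toy bigram model: pick each next word based on which words followed it
# in the training text. Real LLMs are vastly more sophisticated, but the
# core idea is statistical continuation, with no notion of truth.
import random
from collections import Counter, defaultdict

training_text = (
    "the court ruled for the airline the court ruled for the passenger "
    "the passenger sued the airline"
)

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1  # count which word follows which

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        nxt, = random.choices(list(options), weights=options.values())
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # fluent-looking, but nothing guarantees it is true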

Now the lawyer stands in peril of being sanctioned by the court. He declared that he had asked questions of the bot and had received in response genuine case citations, which he included in his brief. He also printed out and included his dialogue with ChatGPT, which at the end offered him the words, “I hope that helps.”

But the lawyer had done nothing further to ensure that those cases existed. They seemed professional enough to fool the professional.

Now the tech world, lawyers and judges are fixated on this threat to their profession. And there are warnings that the threat of erroneous generative AI extends to all of humanity.

But this is not an entirely ominous story.

Researchers at OpenAI and the University of Pennsylvania have concluded that 80% of the U.S. workforce could see an effect on at least 10% of their tasks, according to The NYT. That means some 300 million full-time jobs could be affected by AI. But is that all bad? Could AI become a helpful tool?

By using AI as an assistant, humans can focus on the judgment aspect of data-driven decision-making, checking and interpreting the information the bot provides.

Ironically, the lawyer’s children probably passed their ChatGPT-fueled courses with good grades. Part of that is the way we teach students, offering them tons of details to memorize and regurgitate on tests or in term papers. The lawyer should have judged his ChatGPT-supplied data. Future lawyers now know they must. 

As for education, emphasis should go beyond “what” and even “so what” to “what’s next.” Learning should be about acquiring facts or history, then learning how to think, analyze, interpret and take the next steps. Can chatbots do that? Perhaps in an elementary way they now can. Someday they will in a larger context. And that poses a threat to the survival of humanity, because machines will no longer need us.

METRO image

By Daniel Dunaief

Daniel Dunaief

I’m really writing this. Or am I?

Now that I’ve seen artificial intelligence in action, I know that the system, such as it is, can write impressive pieces in much shorter time than it takes me to write a column or even this sentence.

And yet, I don’t want a machine to write for me or to reach out to you. I prefer the letter by letter, word by word approach I take and would like to think I earn the smile, frown or anything in between I put on your face as a result of the thinking and living I’ve done.

However, I do see opportunities for AI to become the equivalent of a personal assistant, taking care of needed conveniences and reducing inconveniences. For conveniences, how about if AI did the following:

Grocery shopping: I’m sure I get similar foods each week. Maybe my AI system could not only buy the necessary and desired food items, but perhaps it could reduce the ones that are unhealthy or offer new recipes that satisfy my food preferences.

Dishes: I’m not looking for a robot akin to “The Jetsons,” but would love to have a system that removed the dirt and food from my dishes, put them in the dishwasher, washed them and then put them away. An enhanced system also might notice when a dish wasn’t clean and would give that dish another wash.

Laundry: Okay, I’ll admit it. I enjoy folding warm laundry, particularly in the winter, when my cold hands are starting to crack from being dry. Still, it would save time and energy to have a laundry system that washed my clothes, folded them and put them away, preferably so that I could see and access my preferred clothing.

Pharmacy: I know this is kind of dangerous when it comes to prescriptions, but it’d be helpful to have a system that replenished basic, over-the-counter supplies, such as band-aids. Perhaps it could also pick out new birthday and greeting cards that expressed particular sentiments in funny yet tasteful ways for friends and family who are celebrating milestone birthdays or are living through other joyful or challenging times.

For the inconveniences, an AI system would help by:

Staying on hold: At some point, we’ve all waited endlessly on hold for some company to pick up the phone to speak to us about changing our flights, scheduling a special dinner reservation or speaking with someone about the unusual noise our car makes. Those “on hold” calls, with their incessant chatter or nonstop hold music, can be exasperating. An AI system that waited patiently, without complaint or frustration, and handed me the phone the moment a person picked up would be a huge plus.

Optimize necessary updates: Car inspections, annual physicals, oil changes, and trips to the vet can and do go on a calendar. Still, it’d be helpful to have an AI system that recognizes these regular needs and coordinates an optimal time (given my schedule and the time it’ll take to travel to and from these events) to ensure I don’t miss an appointment and to minimize the effort necessary.

Send reminders to our children: Life is full of balances, right? Too much or too little of something is unhealthy. These days, we sometimes have to write or text our kids several times before we get to speak with them live. An AI system might send them a casual, but loving, reminder that their not-so-casual but loving parents would like to speak with them live.

Provide a test audience: In our heads, we have the impulse to share something funny, daring or challenging, like, “hey, did you get dressed in the dark” or “wow, it must be laundry day.” Sure, that might be funny, but an AI system designed to appreciate humor in the moment — and to have an awareness of our audience — might protect us from ourselves. Funny can be good and endearing, but can also annoy.