
artificial intelligence

Change is not just a distant possibility; it is a force shaping the way we live, work and connect with one another today.

From artificial intelligence and machine learning to environmental and clean energy initiatives, the landscape of technology is evolving at an unprecedented pace, presenting us with both challenges and opportunities. 

In recent news we have seen the incorporation of AI into the classroom, the workforce and industry. We have seen the integration of technology on a local level, as in the case of the CBORD Patient app for meal ordering at Stony Brook University Hospital. We even see technology connecting us to one another in civics and other community gatherings through platforms such as Zoom. We have the opportunity to chat in the many community-run online forums accessed via Facebook and other platforms.

We have seen proposals for clean energy initiatives such as the Sunrise Wind project or the governor’s proposal for electric school buses. We have also seen investments and grants given to institutions such as Brookhaven National Lab and Stony Brook University to help further innovation and creation. 

While some may view these changes with apprehension or skepticism, we must recognize that the march of progress is unavoidable. Rather than resisting the tide of innovation, let us embrace it as a means to propel our community forward into a brighter, more prosperous future.

One of the most promising aspects of integrating emerging technologies into our community is the potential to enhance efficiency and effectiveness across various sectors. Whether it’s optimizing transportation systems through the use of predictive analytics or improving access to health care services through telemedicine and patient assistive applications, technology has the power to revolutionize the way we deliver essential services and meet the needs of our residents.

Moreover, the integration of emerging technologies can foster economic growth and innovation, attracting new businesses, entrepreneurs and investment opportunities to our community. 

However, as we embark on this journey of technological integration, it’s essential that we do so with careful consideration for the ethical, social and environmental implications of our actions. 

As we embrace emerging technologies, let us not lose sight of the importance of human connection and community cohesion. While technology has the power to connect us in unprecedented ways, it can never replace the warmth of a face-to-face conversation or the sense of belonging that comes from being part of a close-knit community. 


By Aidan Johnson

“Does AI belong in the classroom?” read the prompt given to ChatGPT, a chatbot developed by the company OpenAI.

“The question of whether AI belongs in the classroom is a complex one that depends on various factors, including the goals of education, the needs of students and the capabilities of AI technology,” it responded.

Artificial intelligence continues to make headlines, whether it’s due to concerns of replacing actors and writers, new advancements in the ability to make artificially generated videos or worries of misinformation spread by it. However, “the question of whether AI belongs in the classroom” is one that has been on the minds of educators and students.

Some teachers have embraced the use of AI. In an interview with PBS, a high school English teacher in New York City described how he uses AI to cut down on the amount of time it takes to provide feedback on written assignments from students, allowing them to learn from their mistakes much quicker than if he were to solely grade their longer assignments.

Thomas Grochowski, an English professor at St. Joseph’s University, New York, has incorporated AI into his classes, but to a rather minimal degree.

“I usually announce it into the space, where there are very small extra credit assignments where students are encouraged to give the same prompt they were given for a small one-point assignment into ChatGPT, and to write a small piece reflecting on what the robot wrote as opposed to what the student has written,” he explained.

Grochowski added that he makes the assignments optional so students do not have to give information to the site if they do not want to, since some students “are anxious about becoming too familiar with AI.”

“But, it also makes them aware that I’m paying attention,” he elaborated.

While the use of AI is prohibited outside of the optional assignments, that has not stopped students from trying. However, plagiarism-detection software such as Turnitin can detect the use of AI, albeit with imperfect results, as it can also flag the use of more acceptable programs such as Grammarly, an AI typing assistant that reviews aspects of text such as spelling, grammar and clarity.

“I think if it’s going to have a place in the classroom, it’s going to be a result of figuring out where that tool will have utility for us,” said William Phillips, associate chair of the Journalism and New Media Studies Department at St. Joseph’s University.

Phillips described how he has seen students use AI in legitimate ways, such as creating test questions to help them study, or how teachers could use it to help construct lesson plans.

“One thing that has struck me as I’ve learned about AI is the concept of alignment, [which is] making sure that there is some human overseeing the automated process that the AI is involved in to make sure that it’s not going off the rails,” he said.

Phillips cited the hypothetical scenario of the paper clip problem, a thought experiment proposed by philosopher Nick Bostrom, in which an AI told to make as many paper clips as possible would start taking metal from everything, including cars, houses and infrastructure, in order to maximize the number of paper clips.

While the idea of paper clips leading to a dystopian future may seem very unlikely, Phillips stressed the broader idea of needing human oversight “so that the values and objectives of the human societies are aligned with what this new serving technology is capable of.”

Renee Emin, a school psychologist, stressed the importance of finding a balance between AI and humans. While it can be good for children academically, she believes that it is important to pay attention to the impact it has on them socially.

“I think of my autistic students who I work with, who are constantly working to socialize and be able to make a friend and connect to others, and they so easily want their laptops, their iPads, their Chromebooks, because it’s more comfortable. And there’s nothing wrong with that — give them their time to have it,” Emin said.

“But if you start relying solely on AI and technology … there’s a whole connection component that gets completely lost for the children,” she added.

Artificial intelligence is continuing to advance. One way or another, it appears it will be a mainstay in human society and has the potential to impact many different sectors of everyday life.

ChatGPT has the final word: “In summary, while AI can offer significant benefits in terms of personalized learning, teacher support, accessibility and digital literacy, its integration into the classroom should be done thoughtfully, with careful consideration of ethical implications and a focus on enhancing, rather than replacing, human interaction and pedagogy.”

Tom Cassidy with his late father, Hugh 'Joe' Cassidy. Photo by Jonathan Spier

By Thomas M. Cassidy


Artificial Intelligence (AI) will cost many people their jobs. But some occupations desperately needed by a rapidly aging population cannot be replaced by computers or machines. For example, nurse assistants in hospitals and nursing homes.

Research conducted by Goldman Sachs estimates that 25% of current work tasks could be automated by AI. Unlike prior technological advances that replaced workers in labor-intensive occupations, this time “it’s the higher-paying jobs where a college education and analytical skills can be a plus that have a high level of exposure to AI,” according to the Pew Research Center.

During my twenty-year career as an investigator for the New York State Attorney General’s Office, I conducted many investigations of potential patient abuse in nursing homes and other health facilities. I had the privilege of meeting hundreds of nursing assistants. Most were dedicated, knowledgeable and compassionate, but a few were not. Nursing aides dress, bathe, toilet and ambulate patients, among many other duties. Sometimes they also interact with families, which can be a difficult task. Let me explain:

I was assigned to investigate a possible case of patient abuse at a nursing home. An elderly woman with a doctor’s order for a two-person transfer was helped from her bed for a bathroom trip by only one nurse aide. The elderly woman fell and fractured her hip. The nursing assistant was immediately suspended pending an investigation. My assignment was to investigate this incident as a possible crime. Here’s what happened:

The nursing home patient had a visit from her daughter. Mom told her daughter to help her get out of bed and walk her to the bathroom. The daughter obeyed and helped mom get out of bed. The daughter tried to hold her up, but mom was weak and started to slip. The daughter screamed for help. A nursing assistant rushed to help the falling patient, but it was too late. Mom fell and fractured her hip. There was no crime. The nursing assistant returned to work the next day.

Fast forward twenty years. My father, a World War II combat veteran and a decorated NYPD Detective Commander, fractured his hip at age 80. I visited him at the Long Island State Veterans Nursing Home in Stony Brook. He was alone in his room. He says, “Tom, help me get to the bathroom.” I say, “Dad, let me get an aide to help you.” He says, “YOU’RE MY SON, just do this for me. I don’t want anyone else to help.” I told him about the elderly woman who fractured her hip when her daughter tried to help her. He said, okay, go get someone to help. If not for my experience as an investigator, I might have tried to help my father. I was taught “To Honor Thy Father and Thy Mother.” But instead, two aides moved my dad safely to the bathroom and back into his bed. Nine months later he walked out of the nursing home to live at home with my mother.

Not every resident of a nursing home is elderly, but most are. In the United States today, one in every six Americans is age 65 or older. That number will increase dramatically in the next six years to 20% of the population, or 70 million older Americans. Incredibly, nursing homes are closing instead of opening.

The American Health Care Association reports that since 2020 almost 600 nursing homes have closed, and more than half of nursing homes limit new admissions due to staffing shortages. As a result, there is a shortfall of hospital beds nationwide because displaced nursing home patients remain in hospital beds until they can be safely transferred home or to a care facility.

The Massachusetts Hospital Association reports that one out of every seven medical-surgical beds is unavailable due to patients remaining in the hospital when they no longer need hospital care. Keep in mind that hospitals are required by federal law to provide emergency care, stabilize patients, and discharge patients to a safe environment.

The Bureau of Labor Statistics reports that nursing assistants have one of the highest rates of injuries and illnesses because they frequently move patients and perform other physically demanding tasks. For these, and many other tasks, nurse assistants are paid a median wage of less than $18 per hour, not even close to a salary in line with the responsibilities of their job. Small wonder that a survey by the American Health Care Association found that one of the biggest obstacles for hiring new staff in nursing homes is a lack of interested candidates.

Reversing the hemorrhage of nursing home closures requires leaders with Natural Intelligence (NI). It benefits all generations of Americans when hospitals fulfill their mission for acute care and not operate as quasi-nursing homes. After all, languishing in a crowded emergency room “Can Be Hazardous To Your Health!”

Thomas M. Cassidy is the creator of the TV series, Manhattan South, which is in development. (ktpgproductions.com)

This graphic summarizes shifts in public attitudes about AI, according to the Stony Brook-led survey. Image by Jason Jones

A Stony Brook University study suggests that on average, U.S. adults have gained confidence in the capabilities of AI and grown increasingly opposed to extending human rights to advanced AI systems.

In 2021, two Stony Brook University researchers – Jason Jones, PhD, Associate Professor in the Department of Sociology, and Steven Skiena, PhD, Distinguished Teaching Professor in the Department of Computer Science – began conducting a survey study on attitudes toward artificial intelligence (AI) among American adults. Some of their recent findings, published in the journal Seeds of Science, show a shift in Americans’ views on AI.

The researchers compared data collected from random, representative samples in 2021 and 2023 to determine whether public attitudes toward AI have changed amid recent technological developments – most notably the launch of OpenAI’s ChatGPT chatbot in late 2022. The new work builds on previous research into how AI is perceived in society, by way of the Jones-Skiena Public Opinion of Artificial Intelligence Dashboard and similar survey studies conducted with varying demographics.

The new study sampled two unique groups of nearly 500 Americans ages 18 and above, one of which was surveyed in March 2021 and the other in April 2023. Participants shared their opinions on the achievability of constructing a computer system able to perform any intellectual task a human is capable of, whether such a system should be built at all, and whether that system – referred to as Artificial General Intelligence (AGI) – should be afforded the same rights as a human being.

Google Surveys was originally used as the platform for this research due to its capability of delivering random, representative samples.

“What we truly wanted to know was the distribution and average of public opinion in the U.S. population,” says Jones, co-author and also a member of Stony Brook’s Institute for Advanced Computational Science (IACS). “A random, representative sample is the gold standard for estimating that in survey research. Google shut down their Google Surveys product in late 2022, so we used another platform called Prolific to do the same thing for the second sample.”

Once the samples were collated, a statistically significant change in opinion was revealed regarding whether an AGI system is possible to build and whether it should have the same rights as a human.

In 2023, American adults more strongly believed in the achievability of AGI, yet were more adamantly against affording such systems the same rights as human beings. There was no statistically significant change in public opinion on whether AGI should be built, which was weakly favored across both samples.
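
The article does not say which statistical test the researchers used; as a rough illustration of how a change between two independent survey waves of roughly 500 respondents each might be checked for significance, here is a minimal Python sketch of a two-proportion z-test. The counts are invented for illustration and are not the study's data.

```python
# Hypothetical sketch only: a two-proportion z-test comparing the share of
# respondents agreeing with a statement in two independent survey waves.
# The counts below are invented for illustration, not the study's data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(agree_a, n_a, agree_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in proportions."""
    p_a, p_b = agree_a / n_a, agree_b / n_b
    pooled = (agree_a + agree_b) / (n_a + n_b)           # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: 250 of 500 respondents in 2021 vs. 310 of 500 in 2023
# said an AGI system is achievable.
z, p = two_proportion_z_test(250, 500, 310, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 would indicate a significant shift
```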

Jones and Skiena stress that more studies must be conducted to better understand public perceptions of artificial intelligence as the technology continues to grow in societal relevance.

They will repeat the survey this spring, using the same methods as in 2023, in the hope of building further on their findings.

Stony Brook University admissions office where about 10,000 students applied through the school’s first early action program. Photo courtesy Stony Brook University

By Daniel Dunaief

For Stony Brook University, 2024 will be the year of more, as in more college counselors, more classes, more study abroad opportunities, more artificial intelligence and more faculty.

The downstate flagship university, which is a member of the Association of American Universities and has been climbing the rankings of colleges from US News and World Report, plans to address several growing needs.

“We have invested heavily in new advisors,” said Carl Lejuez, executive vice president and provost at Stony Brook, in a wide-ranging interview. These advisors will be coming on board throughout the semester.

With additional support from the state and a clear focus on providing constructive guidance, the university is working to reduce the number of students each advisor has, enabling counselors to “focus on the students they are serving,” Lejuez said.

Advisors will help students work towards graduation and will hand off those students to an engaged career center.

At the same time, Stony Brook is expanding its global footprint. Lejuez said study abroad options were already “strong” in Europe, while the university is developing additional opportunities in Asia and Africa.

The university prioritizes making study abroad as affordable as possible, offering several scholarships from the office of global affairs and through individual departments.

Students aren’t always aware that “they can study abroad in any SBU-sponsored program for a semester and keep all of their existing federal aid and scholarships and in many cases the full cost of that semester abroad is comparable and sometimes even less expensive” than what the student would spend on Long Island, Lejuez explained in an email.

Stony Brook University Executive Vice President and Provost Carl Lejuez. Photo courtesy Conor Harrigan

As for artificial intelligence, Stony Brook plans to expand on existing work in the realm of teaching, mentoring, research and community outreach.

In efforts sponsored by the Center for Excellence in Learning and the Library, the university is holding multiple training sessions for faculty to discuss how they approach AI in their classrooms.

The library opened an AI Lab that will enable students to experiment, innovate and work on AI projects, Lejuez said. The library plans to hire several new librarians with expertise in AI, machine learning and innovation.

The library is training students on the ethical use of AI and will focus on non-STEM disciplines to help students in the arts, humanities and social sciences.

Artificial intelligence “has its strengths and weaknesses,” said Lejuez. “We are not shying away from it.”

As for the community, the hope is that Stony Brook will use the semester to develop plans for kindergarten through 12th grade and then launch the expansion later this spring.

Additional classes

Lejuez acknowledged that class capacity created challenges in the past.

Stony Brook is using predictive analysis to make decisions about where to add classes and sections. At this point, the university has invested in the most in-demand classes in fields such as computer science, biology, chemistry, psychology and business.

The school has also added capacity in writing, math and languages.

Stony Brook is focused on experiential opportunities across four domains: study abroad, internships, research and entrepreneurship.

The school is developing plans for additional makerspaces, which are places where people with shared interests can come together to use equipment and exchange ideas and information.

New hires

Stony Brook is in the middle of a hiring cycle and is likely to “bring the largest group of new faculty we’ve had in many years” on board, the provost said. “This is going to have a big impact on the student experience” including research, climate science, artificial intelligence and healthy aging.

The additional hires will create more research experiences for undergraduates, Lejuez said.

Stony Brook recently created a Center for Healthy Aging, CHA, which combines researchers and clinicians who are focused on enhancing the health and wellness of people as they age.

Amid a host of new opportunities, a rise in the US News and World Report rankings and a victory in the city’s Governors Island contest to create a climate solutions center, Stony Brook has seen an increase in applications from the state, the country and other countries.

This year, about 10,000 students applied to Stony Brook’s first early action admissions process, which Lejuez described as a “great success.”

Amid a world in which regional conflicts have had echoes of tension and disagreement in academic institutions around the country and with an election cycle many expect will be especially contentious, Stony Brook’s Humanities Institute has put together several programs.

These include a talk on “Muslim and Jewish Relations in the Middle Ages” on February 15th, another on “The Electoral Imagination: Literature, Legitimacy, and Other Rigged Systems” on April 17th and, among others, a talk on April 18th titled “The Problem of Time for Democracies.”

True to the core values

Amid all the growth, Stony Brook, led by President Maurie McInnis, plans to continue to focus on its core values.

Lejuez said some people have asked, “are we still going to be the university that really provides social mobility opportunities in ways that are just not available in other places? We will always be that. Everything else happens in the context” of that goal. 

Ali Khosronejad in front of the Santa Maria Cathedral, which is considered the first modern cathedral in Madrid.

By Daniel Dunaief

An approaching weather front brings heavy rains and a storm surge, threatening to inundate homes and businesses with dangerous water and potentially undermining critical infrastructure like bridges.

Once officials figure out the amount of water that will affect an area, they can either send out inspectors to survey the exact damage or they can use models that take time to process and analyze the likely damage.


Ali Khosronejad, Associate Professor in the Department of Civil Engineering at Stony Brook University, hopes to use artificial intelligence to change that.

Khosronejad recently received $550,000 from the National Science Foundation (NSF) for four years to create a high-fidelity model using artificial intelligence that will predict the flood impact on infrastructure.

The funds, which will be available starting on June 20, will support two PhD students who will work to provide an artificial intelligence-based program that can run on a single laptop at a “fraction of the cost of more advanced modeling approaches,” Khosronejad said during an interview in Madrid, Spain, where he is on sabbatical leave under a Fulbright U.S. Senior Scholar Award. He is doing his Fulbright research at Universidad Carlos III de Madrid.

Stony Brook University will also provide some funding for these students, which will help defray the cost of expenses related to traveling and attending conferences and publishing papers.

In the past, Stony Brook has been “quite generous when it comes to supporting graduate students working on federally funded projects,” Khosronejad explained and he hopes that continues with this research.

Khosronejad and his students will work with about 50 different flooding and terrain scenarios, which will cover about 95 percent of extreme flooding. These 50 possibilities will cover a range of waterways, infrastructure, topography and coastal areas. The researchers will feed data into their high-fidelity supercomputing cluster simulations to train artificial intelligence to assess the likely damage from a flood.

As they build the model, Khosronejad explained that they will collect data from floods, feed them into the computer and test how well the computer predicts the kind of flooding that can cause damage or threaten the stability of structures like bridges. Over the next four years, the team will collect data from the Departments of Transportation in California, Minnesota and New York.

Nearly six years ago, his team attempted to use algorithms available in ChatGPT for some of his AI development. Those algorithms, however, were not suited to flood flow prediction. He then developed new algorithms based on convolutional neural networks (CNNs), working to improve their capabilities by including physics-based constraints.
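
The article does not describe the model's internals, so the following is only a hypothetical sketch, in Python with PyTorch, of the general idea of training a convolutional network on simulated flood data while adding a physics-inspired penalty to the loss. The network layout, field names and the placeholder constraint are illustrative assumptions, not the actual research code.

```python
# Hypothetical sketch only: a small convolutional network that maps terrain and
# forcing fields to a predicted water-depth field, trained with a data-fitting
# loss plus a physics-inspired penalty. Not the actual research code.
import torch
import torch.nn as nn

class FloodCNN(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # predicted water depth
        )

    def forward(self, x):
        return self.net(x)

def physics_penalty(depth):
    # Placeholder constraint: discourage negative depths and overly rough depth
    # fields; a real model would encode actual flow physics (e.g., mass balance).
    negativity = torch.relu(-depth).mean()
    dx = (depth[..., :, 1:] - depth[..., :, :-1]).abs().mean()
    dy = (depth[..., 1:, :] - depth[..., :-1, :]).abs().mean()
    return negativity + 0.1 * (dx + dy)

model = FloodCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Dummy batch standing in for high-fidelity simulation output: inputs could be
# terrain elevation, inflow and roughness; targets, the simulated water depth.
inputs = torch.randn(8, 3, 64, 64)
targets = torch.rand(8, 1, 64, 64)

for step in range(200):
    prediction = model(inputs)
    loss = mse(prediction, targets) + 0.5 * physics_penalty(prediction)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```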

“We are very enthusiastic about this,” Khosronejad said. “We do think that this opportunity can help us to open up the use of AI for other applications in fluid mechanics” in fields such as renewable energy, contaminant transport predictions in urban areas and biological flow predictions, among others.

Planners working with groups such as the California Department of Transportation could use such a program to emphasize which infrastructure might be endangered.

This analysis could highlight effective mitigation strategies. Artificial intelligence can “provide [planners and strategists] with a tool that is not that expensive, can run on a single laptop, can reproduce lots of scenarios with flooding, to figure out which infrastructure is really in danger,” Khosronejad said.

Specifically, this tool could evaluate the impact of extreme floods on bridge foundations. Floods can remove soil from around the foundation of a bridge, which can cause it to collapse. Civil engineers can strengthen bridge foundations and mitigate the effect of future floods by using riprap, which is a layer of large stones.

This kind of program can reduce the reliance on surveying after a flood, which is expensive and sometimes “logistically impossible and unsafe” to monitor areas like the foundations of bridges, Khosronejad said. He plans to build into the AI program an awareness of the changing climate, so that predictions using it in three or five years can provide an accurate reflection of future conditions.

“Floods are getting more and more extreme,” he said. “We realize that floods we feed into the program during training will be different” from the ones that will cause damage in subsequent years.

Floods that had a return period of every 100 years are now happening much more frequently. In one or two decades, such a flood might occur every 10 years.

Adding updated data can allow practitioners to make adjustments to the AI program a decade down the road, he suggested. He and his team will add data every year, which will create a more versatile model.

What it can’t do

While the AI programs will predict the damage to infrastructure from floods, they will not address storm or flood predictions.

“Those are different models, based on the movement of clouds” and other variables, Khosronejad said. “This doesn’t do that: if you give the program a range of flood magnitudes, it will tell you what will happen.”

High-fidelity models currently exist that can do what Khosronejad is proposing, although those models require hundreds of CPUs to run for five months. Khosronejad has developed his own in-house high-fidelity model that is capable of making similar predictions. He has tested it to examine various infrastructures and used it to study various flooding events. These models are expensive, which is why he’s trying to replace them with AI to reduce the cost while maintaining fidelity.

AI, on the other hand, can run on a single CPU and may be able to provide the same result, allowing people to plan ahead before a flood happens. The NSF approved the single principal investigator concept two months ago.

Khosronejad has worked with Fotis Sotiropoulos, former Dean of the College of Engineering and Applied Sciences at Stony Brook and current Provost at Virginia Commonwealth University, on this and other projects.

The two have biweekly discussions over the weekend about various projects.

Sotiropoulos was “very happy” when Khosronejad told him he received the funds. Although he’s not a part of the project, Sotiropoulos will “provide inputs.”

Sotiropoulos has “deep insights” into fluid mechanics. “When you have him on your side, it always pays off,” Khosronejad said.


By Leah S. Dunaief


You’ve heard of ChatGPT, yes? So had a lawyer in Brooklyn, from his college-aged children. While the lawyer has been in practice for 30 years, he had no prior experience with the OpenAI chatbot. But when he was hired for a lawsuit against the airline Avianca and went into Federal District Court with his legal brief filled with judicial opinions and citations, poor guy, he made history.

All the evidence he was bringing to the case was generated by ChatGPT. All of it was false: creative writing generated by the bot.

Here is the story, as told in The New York Times Business Section on June 9. A passenger, who had sued the airline for injury to his knee by a metal serving cart as it was rolled down the aisle in 2019 on a flight from El Salvador to New York, was advised that the lawsuit should be dismissed because the statute of limitations had expired. His lawyer, however, responded with the infamous 10-page brief offering more than half a dozen court decisions supporting their argument that the case should be allowed to proceed. There was only one problem: None of the cases cited in the brief could be found.

The decisions, although they named previous lawsuits against Delta Airlines, Korean Airlines and China Southern Airlines, and offered realistic names of supposedly injured passengers, were not real.

“I heard about this new site, which I falsely assumed was, like, a super search engine,” lamely offered the embarrassed attorney.

“Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet,” explained The NYT.
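
As a toy illustration of that idea, hypothetical and vastly simpler than any real large language model, the following Python sketch counts which word tends to follow which in a small text sample and then generates new text by repeatedly sampling a statistically likely next word.

```python
# Toy illustration: count which word follows which in a small text sample, then
# generate text by sampling a statistically likely next word. Real large language
# models are vastly more sophisticated, but the core idea of predicting likely
# continuations is similar.
import random
from collections import Counter, defaultdict

sample_text = (
    "the court finds that the motion is denied and the case is dismissed "
    "the court finds that the statute of limitations has expired"
)

words = sample_text.split()
follow_counts = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def generate(start, length=10):
    word, output = start, [start]
    for _ in range(length):
        options = follow_counts.get(word)
        if not options:
            break
        candidates, weights = zip(*options.items())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g., "the court finds that the case is dismissed ..."
```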

Now the lawyer stands in peril of being sanctioned by the court. He declared that he had asked questions of the bot and had received what appeared to be genuine case citations, which he included in his brief. He also printed out and included his dialogue with ChatGPT, which at the end offered him the words, “I hope that helps.”

But the lawyer had done nothing further to ensure that those cases existed. They seemed professional enough to fool the professional.

Now the tech world, lawyers and judges are fixated on this threat to their profession. And there are warnings that, with erroneous generative AI, the threat could extend to all of humanity.

But this is not an entirely ominous story.

Researchers at OpenAI and the University of Pennsylvania have concluded that 80% of the U.S. workforce could see an effect on at least 10% of their tasks, according to The NYT. That means that some 300 million full-time jobs could be affected by AI. But is that all bad? Could AI become a helpful tool?

By using AI as an assistant, humans can focus on the judgment aspect of data-driven decision-making, checking and interpreting the information provided by the bot. The bot supplies the material; the human supplies the judgment.

Ironically, the lawyer’s children probably passed their ChatGPT-fueled courses with good grades. Part of that is the way we teach students, offering them tons of details to memorize and regurgitate on tests or in term papers. The lawyer should have judged his ChatGPT-supplied data. Future lawyers now know they must. 

As for education, emphasis should go beyond “what” and even “so what” to “what’s next.” Learning should be about, once the facts or history are in hand, how to think, how to analyze, how to interpret and how to take the next steps. Can chatbots do that? Perhaps in an elementary way they now can. Someday they will in a larger context. And that poses a threat to the survival of humanity, because machines will no longer need us.


By Michael E. Russell


Two weeks ago I had the scary experience of watching 60 Minutes on CBS. The majority of the telecast pertained to A.I. (artificial intelligence). Scott Pelley of CBS interviewed Google CEO Sundar Pichai. His initial quote was that A.I. “will be as good or as evil as human nature allows.” The revolution, he continued, “is coming faster than one can imagine.”

I realize that my articles should pertain to investing, however, this 60 Minutes segment made me question where we as a society are headed.

Google and Microsoft are investing billions of dollars into A.I., using microchips built by companies such as Nvidia. Since 2019, Pichai has led both Google and its parent company Alphabet, valued at $1.3 trillion. Worldwide, Google handles 90% of internet searches, and its software runs on 70% of smartphones. It is presently in a race with Microsoft for A.I. dominance.

Two months ago Microsoft unveiled its new chatbot. Google responded by releasing its own version named Bard. As the segment continued, we were introduced to Bard by Google Vice President Sissie Hsiao. The first thing that hit me was that Bard does not search the internet for answers the way the Google search engine does.

What is confounding is that, running on microchips built by companies such as Nvidia, these systems are more than 100 thousand times faster than the human brain. In my case, maybe 250 thousand times faster!

As a test, Bard was asked to summarize the New Testament. It accomplished this in 5 seconds. In Latin, it took 4 seconds. I need to sum this up: in 10 years A.I. will impact all aspects of our lives. The revolution in artificial intelligence is at the center of a raging debate, with people on one side hoping it will save humanity while others predict doom. I believe that we will be having many more conversations in the near future.

Okay folks, where is the economy today?  Well, apparently inflation is still a major factor in our everyday life. The Fed will probably increase rates for a 10th time in less than 2 years.

Having been employed by various Wall Street firms over the past 4 decades, I have learned that high-priced analysts have the ability to foresee market direction no better than my grandchildren.

Looking back to May 2011, our savvy elected officials increased our debt ceiling, which led to the first-ever downgrade of U.S. debt from its top triple-A rating by S&P. This caused a very quick 19% decline in the S&P index. Sound familiar?

It appears that the only time Capitol Hill tries to solve the debt ceiling impasse is when their own portfolio is affected.

This market rally has been led by chatbot-affiliated companies. These stocks have added $1.4 trillion in stock market value this year. Keep in mind that just 6 companies were responsible for almost 60% of S&P gains. These are the 6 leaders: Microsoft, Alphabet, Amazon, Meta Platforms, Salesforce and, of course, Nvidia.

In the meantime, the Administration states that inflation has been reined in. What stores are they shopping in? Here is the data released from Washington, showing year-over-year changes from March 2022 to March 2023:

• Food and non-alcoholic beverages up 8.1%

• Bread and cereal products up 10.8%

• Meat and seafood up 4.3%

• Electricity up 15.7%

When 1 pound of hot dogs rises from $3.25 to $7.50, that is an increase of roughly 131%, not 8.1%. When Froot Loops go from $1.89 to $5.14, we are in trouble. The bureaucrats in D.C. make up numbers worse than George Santos.

On a positive note, the flowers are starting to bloom, the grass is starting to grow and we live in a special place. Of historic significance, we happen to be home to the second oldest active Episcopal Church in the United States. This year Caroline Church in Setauket will be celebrating its 300th anniversary. Congratulations.

Michael E. Russell retired after 40 years working for various Wall Street firms. All recommendations being made here are not guaranteed and may incur a loss of principal. The opinions and investment recommendations expressed in the column are the author’s own. TBR News Media does not endorse any specific investment advice and urges investors to consult with their financial advisor. 


By Leah S. Dunaief


To see how it worked, when my grandson sent me the link for ChatGPT, the artificial intelligence software from OpenAI, I asked it to write 650 words about Franklin Delano Roosevelt. Why FDR? I had just finished reading a book about FDR’s “Splendid Deception,” the ways the president hid his polio infirmity from the American public, and he was on my mind.

So, I am going to let you see what was written immediately, on command, in less than five minutes before my disbelieving eyes. Remember, this was authored by a robot.

President Franklin D. Roosevelt, commonly known as FDR, is remembered as one of the most influential and transformative leaders in American history. During his 12-year tenure as President, he led the United States through some of its most challenging times, including the Great Depression and World War II.

FDR was born into a wealthy family in Hyde Park, New York, in 1882. He attended Harvard University and Columbia Law School before beginning a career in politics. He served as a New York state senator from 1910 to 1913, and later as Assistant Secretary of the Navy under President Woodrow Wilson.

In 1921, at the age of 39, FDR was struck by polio, which left him paralyzed from the waist down. Despite this setback, he remained active in politics and was elected Governor of New York in 1928.

It was during the Great Depression, however, that FDR’s leadership would truly shine. When he took office in 1933, the country was in the midst of the worst economic crisis in its history. Millions of Americans were out of work, banks were failing, and confidence in the government was at an all-time low.

FDR responded to the crisis with a bold and ambitious plan to revitalize the economy, which he called the New Deal. The New Deal included a range of programs and policies designed to provide relief to the unemployed, stimulate economic growth, and reform the financial system.

One of the most famous New Deal programs was the Civilian Conservation Corps, which provided jobs to young men in conservation and reforestation projects. Other programs included the Agricultural Adjustment Act, which sought to stabilize crop prices and support farmers, and the National Industrial Recovery Act, which established codes of fair competition for businesses.

FDR’s New Deal was not without its critics, who argued that it represented an unwarranted expansion of federal power and an infringement on individual liberties. But for many Americans, especially those who were struggling to make ends meet during the Depression, FDR’s bold leadership and willingness to take decisive action were a source of hope and inspiration.

FDR was also a key figure in American foreign policy during World War II. When the war broke out in Europe in 1939, he declared that the United States would remain neutral. But after the Japanese attack on Pearl Harbor in 1941, he led the country into the war with a resolve and determination that would help turn the tide against the Axis powers.

FDR was instrumental in building the alliance with the Soviet Union and the United Kingdom, which would eventually lead to victory in Europe. He also played a key role in the development of the atomic bomb, which would be used to end the war in the Pacific.

Despite his many accomplishments, FDR was not without his flaws. Critics have accused him of overreaching in his use of executive power, and of turning a blind eye to human rights abuses committed by the Soviet Union during the war. But his legacy remains one of visionary leadership and bold action in the face of crisis.

FDR’s famous quote, “The only thing we have to fear is fear itself,” has become a rallying cry for leaders in times of crisis. It reflects his belief that the American people had the strength and resilience to overcome even the most daunting challenges, and that with the right leadership and the right policies, anything was possible.

Today, as the United States faces new challenges, from economic inequality to political polarization to climate change, FDR’s legacy remains a source of inspiration and guidance. His commitment to bold action in the face of crisis, his belief in the power of government to make a positive difference in people’s lives, and his unwavering faith in the American people, continue to resonate with us today.

How do teachers distinguish between what was written by a student and what was written by the bot? By the same token, a bot’s help in other contexts, such as planning a schedule or working up a travel itinerary, can be invaluable. ChatGPT, launched this past November, can analyze content from a wide range of information on the internet and then create original content. But is that information always correct? Will the bot take jobs from humans? Or will it merely offer a baseline of information upon which humans can then add more nuanced input?

Stay tuned.


By Daniel Dunaief


I’m really writing this. Or am I?

Now that I’ve seen artificial intelligence in action, I know that the system, such as it is, can write impressive pieces in much shorter time than it takes me to write a column or even this sentence.

And yet, I don’t want a machine to write for me or to reach out to you. I prefer the letter by letter, word by word approach I take and would like to think I earn the smile, frown or anything in between I put on your face as a result of the thinking and living I’ve done.

However, I do see opportunities for AI to become the equivalent of a personal assistant, taking care of needed conveniences and reducing inconveniences. For conveniences, how about if AI did the following:

Grocery shopping: I’m sure I get similar foods each week. Maybe my AI system could not only buy the necessary and desired food items, but perhaps it could reduce the ones that are unhealthy or offer new recipes that satisfy my food preferences.

Dishes: I’m not looking for a robot akin to “The Jetsons,” but would love to have a system that removed the dirt and food from my dishes, put them in the dishwasher, washed them and then put them away. An enhanced system also might notice when a dish wasn’t clean and would give that dish another wash.

Laundry: Okay, I’ll admit it. I enjoy folding warm laundry, particularly in the winter, when my cold hands are starting to crack from being dry. Still, it would save time and energy to have a laundry system that washed my clothes, folded them and put them away, preferably so that I could see and access my preferred clothing.

Pharmacy: I know this is kind of dangerous when it comes to prescriptions, but it’d be helpful to have a system that replenished basic, over-the-counter supplies, such as band-aids. Perhaps it could also pick out new birthday and greeting cards that expressed particular sentiments in funny yet tasteful ways for friends and family who are celebrating milestone birthdays or are living through other joyful or challenging times.

For the inconveniences, an AI system would help by:

Staying on hold: At some point, we’ve all waited endlessly on hold for some company to pick up the phone to speak to us about changing our flights, scheduling a special dinner reservation or speaking with someone about the unusual noise our car makes. Those “on hold” calls, with their incessant chatter or their nonstop hold music, can be exasperating. An AI system that waited patiently, without complaint or frustration and that handed me the phone the moment a person picked up the call, would be a huge plus.

Optimize necessary updates: Car inspections, annual physicals, oil changes, and trips to the vet can and do go on a calendar. Still, it’d be helpful to have an AI system that recognizes these regular needs and coordinates an optimal time (given my schedule and the time it’ll take to travel to and from these events) to ensure I don’t miss an appointment and to minimize the effort necessary.

Send reminders to our children: Life is full of balances, right? Too much or too little of something is unhealthy. These days, we sometimes have to write or text our kids several times before we get to speak with them live. An AI system might send them a casual, but loving, reminder that their not-so-casual but loving parents would like to speak with them live.

Provide a test audience: In our heads, we have the impulse to share something funny, daring or challenging, like, “hey, did you get dressed in the dark” or “wow, it must be laundry day.” Sure, that might be funny, but an AI system designed to appreciate humor in the moment — and to have an awareness of our audience — might protect us from ourselves. Funny can be good and endearing, but can also annoy.