Posts tagged with "artificial intelligence"

Tom Cassidy with his late father, Hugh 'Joe' Cassidy. Photo by Jonathan Spier

By Thomas M. Cassidy

Artificial Intelligence (AI) will cost many people their jobs. But some occupations desperately needed by a rapidly aging population cannot be replaced by computers or machines. For example, nurse assistants in hospitals and nursing homes.

Research conducted by Goldman Sachs estimates that 25% of current work tasks could be automated by Artificial Intelligence (AI). Unlike prior technological advances that replaced workers in labor intensive occupations, this time “it’s the higher-paying jobs where a college education and analytical skills can be a plus that have a high level of exposure to AI,” according to The Pew Research Center.

During my twenty-year career as an investigator for the New York State Attorney General’s Office, I conducted many investigations of potential patient abuse in nursing homes and other health facilities. I had the privilege of meeting hundreds of nursing assistants. Most were dedicated, knowledgeable and compassionate, but a few were not. Nursing aides dress, bathe, toilet and ambulate patients among many other services. Sometimes they also interact with families, which can be a difficult task. Let me explain:

I was assigned to investigate a possible case of patient abuse at a nursing home. An elderly woman with a doctor’s order for a two-person transfer was helped from her bed for a bathroom trip by only one nurse aide. The elderly woman fell and fractured her hip. The nursing assistant was immediately suspended pending an investigation. My assignment was to investigate this incident as a possible crime. Here’s what happened:

The nursing home patient had a visit from her daughter. Mom told her daughter to help her get out of bed and walk her to the bathroom. The daughter obeyed and helped mom get out of bed. The daughter tried to hold her up, but mom was weak and started to slip. The daughter screamed for help. A nursing assistant rushed to help the falling patient, but it was too late. Mom fell and fractured her hip. There was no crime. The nursing assistant returned to work the next day.

Fast forward twenty years. My father, a World War II combat veteran and a decorated NYPD Detective Commander, fractured his hip at age 80. I visited him at the Long Island State Veterans Nursing Home in Stony Brook. He was alone in his room. He says, “Tom, help me get to the bathroom.” I say, “Dad, let me get an aide to help you.” He says, “YOU’RE MY SON, just do this for me. I don’t want anyone else to help.” I told him about the elderly woman who fractured her hip when her daughter tried to help her. He said, okay, go get someone to help. If not for my experience as an investigator, I might have tried to help my father. I was taught “To Honor Thy Father and Thy Mother.” But instead, two aides moved my dad safely to the bathroom and back into his bed. Nine months later he walked out of the nursing home to live at home with my mother.

Not every resident of a nursing home is elderly, but most are. In the United States today, one in every six Americans is age 65 or older. That number will increase dramatically in the next six years to 20% of the population or 70 million older Americans. Incredibly, nursing homes are closing, instead of opening. 

The American Health Care Association reports that since 2020 almost 600 nursing homes have closed, and more than half of nursing homes limit new admissions due to staffing shortages. As a result, there is a shortfall of hospital beds nationwide because displaced nursing home patients remain in hospital beds until they can be safely transferred home or to a care facility.

The Massachusetts Hospital Association reports that one out of every seven medical-surgical beds is unavailable due to patients remaining in the hospital when they no longer need hospital care. Keep in mind that hospitals are required by federal law to provide emergency care, stabilize patients, and discharge patients to a safe environment.

The Bureau of Labor Statistics reports that nursing assistants have one of the highest rates of injuries and illnesses because they frequently move patients and perform other physically demanding tasks. For these and many other tasks, nurse assistants are paid a median wage of less than $18 per hour, nowhere near a salary in line with the responsibilities of their job. Small wonder that a survey by the American Health Care Association found that one of the biggest obstacles to hiring new staff in nursing homes is a lack of interested candidates.

Reversing the hemorrhage of nursing home closures requires leaders with Natural Intelligence (NI). It benefits all generations of Americans when hospitals fulfill their mission of acute care rather than operating as quasi-nursing homes. After all, languishing in a crowded emergency room “Can Be Hazardous To Your Health!”

Thomas M. Cassidy is the creator of the TV series, Manhattan South, which is in development. (ktpgproductions.com)

This graphic summarizes shifts in public attitudes about AI, according to the Stony Brook-led survey. Image by Jason Jones

A Stony Brook University study suggests that on average, U.S. adults have gained confidence in the capabilities of AI and grown increasingly opposed to extending human rights to advanced AI systems.

In 2021, two Stony Brook University researchers – Jason Jones, PhD, Associate Professor in the Department of Sociology, and Steven Skiena, PhD, Distinguished Teaching Professor in the Department of Computer Science – began conducting a survey study on attitudes toward artificial intelligence (AI) among American adults. Some of their recent findings, published in the journal Seeds of Science, show a shift in Americans’ views on AI.

The researchers compared data collected from random, representative samples in 2021 and 2023 to determine whether public attitudes toward AI have changed amid recent technological developments – most notably the launch of OpenAI’s ChatGPT chatbot in late 2022. The new work builds on previous research into how AI is perceived in society, by way of the Jones-Skiena Public Opinion of Artificial Intelligence Dashboard and similar survey studies conducted with varying demographics.

The new study sampled two unique groups of nearly 500 Americans ages 18 and above, one of which was surveyed in March 2021 and the other in April 2023. Participants shared their opinions on the achievability of constructing a computer system able to perform any intellectual task a human is capable of, whether such a system should be built at all, and whether that system – referred to as Artificial General Intelligence (AGI) – should be afforded the same rights as a human being.

Google Surveys was originally used as the platform for this research due to its capability of delivering random, representative samples.

“What we truly wanted to know was the distribution and average of public opinion in the U.S. population,” says Jones, co-author and also a member of Stony Brook’s Institute for Advanced Computational Science (IACS). “A random, representative sample is the gold standard for estimating that in survey research. Google shut down their Google Surveys product in late 2022, so we used another platform called Prolific to do the same thing for the second sample.”

Once the samples were collated, a statistically significant change in opinion was revealed regarding whether an AGI system is possible to build and whether it should have the same rights as a human.

In 2023, American adults more strongly believed in the achievability of AGI, yet were more adamantly against affording such systems the same rights as human beings. There was no statistically significant change in public opinion on whether AGI should be built, which was weakly favored across both samples.
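
The article reports the outcome of the comparison rather than the statistics behind it. As a rough illustration of the kind of test involved in comparing two independent samples, the sketch below checks whether the share of respondents giving a particular answer differs between two survey years; the response counts are invented placeholders, not the study's data.

```python
# Hypothetical sketch of a two-sample comparison like the one described:
# did the share of respondents who call AGI "achievable" change between
# the 2021 and 2023 samples? The counts below are made up for illustration.
from scipy.stats import chi2_contingency

observed = [
    [230, 270],  # 2021 sample: [achievable, not achievable] (placeholder counts)
    [300, 200],  # 2023 sample: [achievable, not achievable] (placeholder counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value (commonly < 0.05) would indicate a statistically
# significant shift in opinion between the two samples.
```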

Jones and Skiena stress that more studies must be conducted to better understand public perceptions of artificial intelligence as the technology continues to grow in societal relevance.

They will repeat the survey this spring with the same methods used in 2023 with the hope of building further on their findings.

Stony Brook University admissions office where about 10,000 students applied through the school’s first early action program. Photo courtesy Stony Brook University

By Daniel Dunaief

For Stony Brook University, 2024 will be the year of more, as in more college counselors, more classes, more study abroad opportunities, more artificial intelligence and more faculty.

The downstate flagship university, which is a member of the Association of American Universities and has been climbing the college rankings from U.S. News & World Report, plans to address several growing needs.

“We have invested heavily in new advisors,” said Carl Lejuez, executive vice president and provost at Stony Brook, in a wide-ranging interview. These advisors will be coming on board throughout the semester.

With additional support from the state and a clear focus on providing constructive guidance, the university is working to reduce the number of students each advisor has, enabling counselors to “focus on the students they are serving,” Lejuez said.

Advisors will help students work towards graduation and will hand off those students to an engaged career center.

At the same time, Stony Brook is expanding its global footprint. Lejuez said study abroad options were already “strong” in Europe, while the university is developing additional opportunities in Asia and Africa.

The university prioritizes making study abroad as affordable as possible, offering several scholarships from the office of global affairs and through individual departments.

Students aren’t always aware that “they can study abroad in any SBU-sponsored program for a semester and keep all of their existing federal aid and scholarships and in many cases the full cost of that semester abroad is comparable and sometimes even less expensive” than what the student would spend on Long Island, Lejuez explained in an email.

Stony Brook University Executive Vice President and Provost Carl Lejuez. Photo courtesy Conor Harrigan

As for artificial intelligence, Stony Brook plans to expand on existing work in the realm of teaching, mentoring, research and community outreach.

In efforts sponsored by the Center for Excellence in Learning and the Library, the university is holding multiple training sessions for faculty to discuss how they approach AI in their classrooms.

The library opened an AI Lab that will enable students to experiment, innovate and work on AI projects, Lejuez said. The library plans to hire several new librarians with expertise in AI, machine learning and innovation.

The library is training students on the ethical use of AI and will focus on non-STEM disciplines to help students in the arts, humanities and social sciences.

Artificial intelligence “has its strengths and weaknesses,” said Lejuez. “We are not shying away from it.”

As for the community, the hope is that Stony Brook will use the semester to develop plans for kindergarten through 12th grade and then launch the expansion later this spring.

Additional classes

Lejuez acknowledged that class capacity created challenges in the past.

Stony Brook is using predictive analysis to make decisions about where to add classes and sections. At this point, the university has invested in the most in-demand classes in fields such as computer science, biology, chemistry, psychology and business.

The school has also added capacity in writing, math and languages.

Stony Brook is focused on experiential opportunities across four domains: study abroad, internships, research and entrepreneurship.

The school is developing plans for additional makerspaces, which are places where people with shared interests can come together to use equipment and exchange ideas and information.

New hires

Stony Brook is in the middle of a hiring cycle and is likely to “bring the largest group of new faculty we’ve had in many years” on board, the provost said. “This is going to have a big impact on the student experience” including research, climate science, artificial intelligence and healthy aging.

The additional hires will create more research experiences for undergraduates, Lejuez said.

Stony Brook recently created a Center for Healthy Aging, CHA, which combines researchers and clinicians who are focused on enhancing the health and wellness of people as they age.

Amid a host of new opportunities, a rise in the US News and World Report rankings and a victory in the city’s Governors Island contest to create a climate solutions center, Stony Brook has seen an increase in applications from the state, the country and other countries.

This year, about 10,000 students applied to Stony Brook’s first early action admissions process, which Lejuez described as a “great success.”

At a time when regional conflicts have produced echoes of tension and disagreement in academic institutions around the country, and with an election cycle many expect to be especially contentious, Stony Brook’s Humanities Institute has put together several programs.

This includes a talk on “Muslim and Jewish Relations in the Middle Ages” on February 15th, another on “The Electoral Imagination: Literature, Legitimacy, and Other Rigged Systems” on April 17th and, among others, a talk on April 18th titled “The Problem of Time for Democracies.”

True to the core values

Amid all the growth, Stony Brook, led by President Maurie McInnis, plans to continue to focus on its core values.

Lejuez said some people have asked, “are we still going to be the university that really provides social mobility opportunities in ways that are just not available in other places? We will always be that. Everything else happens in the context” of that goal. 

Ali Khosronejad in front of the Santa Maria Cathedral, which is considered the first modern cathedral in Madrid.

By Daniel Dunaief

An approaching weather front brings heavy rains and a storm surge, threatening to inundate homes and businesses with dangerous water and potentially undermining critical infrastructure like bridges.

Once officials figure out the amount of water that will affect an area, they can either send out inspectors to survey the exact damage or they can use models that take time to process and analyze the likely damage.

Ali Khosronejad, Associate Professor in the Department of Civil Engineering at Stony Brook University, hopes to use artificial intelligence to change that.

Khosronejad recently received $550,000 from the National Science Foundation (NSF) for four years to create a high-fidelity model using artificial intelligence that will predict the flood impact on infrastructure.

The funds, which will be available starting on June 20, will support two PhD students who will work to provide an artificial intelligence-based program that can work on a single laptop at a “fraction of the cost of more advanced modeling approaches,” Khosronejad said during an interview in Madrid, Spain, where he is on sabbatical leave under a Fulbright U.S. Senior Scholar Award. He is doing his Fulbright research at Universidad Carlos III de Madrid.

Stony Brook University will also provide some funding for these students, which will help defray the cost of expenses related to traveling and attending conferences and publishing papers.

In the past, Stony Brook has been “quite generous when it comes to supporting graduate students working on federally funded projects,” Khosronejad explained and he hopes that continues with this research.

Khosronejad and his students will work with about 50 different flooding and terrain scenarios, which will cover about 95 percent of extreme flooding. These 50 possibilities will cover a range of waterways, infrastructure, topography, and coastal areas. The researchers will feed data into their high fidelity supercomputing cluster simulations to train artificial intelligence to assess the likely damage from a flood.

As they build the model, Khosronejad explained that they will collect data from floods, feed them into the computer and test how well the computer predicts the kind of flooding that can cause damage or threaten the stability of structures like bridges. Over the next four years, the team will collect data from the Departments of Transportation in California, Minnesota and New York.

Nearly six years ago, his team attempted to use the kind of algorithms available in ChatGPT for some of his AI development. Those algorithms, however, were not suited to flood flow prediction. He instead developed new algorithms based on convolutional neural networks (CNNs) and worked to improve their capabilities by including some physics-based constraints.
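
The article does not describe the model's internals. As a rough sketch of the general pattern, a convolutional network can be trained with a physics-based penalty added to the ordinary data-fit loss; the layer sizes and the simple mass-balance term below (written in PyTorch) are assumptions made for illustration, not Khosronejad's actual code.

```python
# Illustrative sketch only, not the researchers' model: a small CNN that
# maps terrain/flood inputs to a predicted water-depth map, trained with a
# data-fit loss plus a crude, hypothetical mass-balance penalty.
import torch
import torch.nn as nn

class FloodCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),  # 3 input channels, e.g. terrain, roughness, inflow
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),             # predicted water depth per grid cell
        )

    def forward(self, x):
        return self.net(x)

def loss_fn(pred_depth, true_depth, net_inflow, lam=0.1):
    data_loss = nn.functional.mse_loss(pred_depth, true_depth)
    # Hypothetical physics-based constraint: total predicted water volume
    # should roughly match the net inflow for the scenario.
    physics_loss = ((pred_depth.sum(dim=(1, 2, 3)) - net_inflow) ** 2).mean()
    return data_loss + lam * physics_loss
```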

“We are very enthusiastic about this,” Khosronejad said. “We do think that this opportunity can help us to open up the use of AI for other applications in fluid mechanics” in fields such as renewable energy, contaminant transport predictions in urban areas and biological flow predictions, among others.

Planners working with groups such as the California Department of Transportation could use such a program to emphasize which infrastructure might be endangered.

This analysis could highlight effective mitigation strategies. Artificial intelligence can “provide [planners and strategists] with a tool that is not that expensive, can run on a single laptop, can reproduce lots of scenarios with flooding, to figure out which infrastructure is really in danger,” Khosronejad said.

Specifically, this tool could evaluate the impact of extreme floods on bridge foundations. Floods can remove soil from around the foundation of a bridge, which can cause it to collapse. Civil engineers can strengthen bridge foundations and mitigate the effect of future floods by using riprap, which is a layer of large stones.

This kind of program can reduce the reliance on surveying after a flood, which is expensive and sometimes “logistically impossible and unsafe,” particularly around areas like the foundations of bridges, Khosronejad said. He plans to build into the AI program an awareness of the changing climate, so that predictions made with it in three or five years can provide an accurate reflection of future conditions.

“Floods are getting more and more extreme,” he said. “We realize that floods we feed into the program during training will be different” from the ones that will cause damage in subsequent years.

Floods that had a return period of every 100 years are now happening much more frequently. In one or two decades, such a flood might occur every 10 years.

Adding updated data can allow practitioners to make adjustments to the AI program a decade down the road, he suggested. He and his team will add data every year, which will create a more versatile model.

What it can’t do

While the AI programs will predict the damage to infrastructure from floods, they will not address storm or flood predictions.

“Those are different models, based on the movement of clouds” and other variables, Khosronejad said. “This doesn’t do that: if you give the program a range of flood magnitudes, it will tell you what will happen.”

High-fidelity models currently exist that can do what Khosronejad is proposing, although those models require hundreds of CPUs running for five months. Khosronejad has developed his own in-house high-fidelity model that is capable of making similar predictions, and he has used it to examine various infrastructures and study various flooding events. These models are expensive, which is why he’s trying to replace them with AI to reduce the cost while maintaining fidelity.

AI, on the other hand, can run on a single CPU and may be able to provide the same result, which will allow people to plan ahead before a flood happens. The NSF approved the single-principal-investigator concept two months ago.

Khosronejad has worked with Fotis Sotiropoulos, former Dean of the College of Engineering and Applied Sciences at Stony Brook and current Provost at Virginia Commonwealth University, on this and other projects.

The two hold biweekly weekend discussions about their various projects.

Sotiropoulos was “very happy” when Khosronejad told him he received the funds. Although he’s not a part of the project, Sotiropoulos will “provide inputs.”

Sotiropoulos has “deep insights” into fluid mechanics. “When you have him on your side, it always pays off,” Khosronejad said.

By Leah S. Dunaief

You’ve heard of ChatGPT, yes? So had a lawyer in Brooklyn, who learned of it from his college-aged children. While the lawyer has been in practice for 30 years, he had no prior experience with the OpenAI chatbot. But when he was hired for a lawsuit against the airline Avianca and went into Federal District Court with his legal brief filled with judicial opinions and citations, poor guy, he made history.

All the evidence he was bringing to the case was generated by ChatGPT. All of it was false: creative writing generated by the bot.

Here is the story, as told in The New York Times Business Section on June 9. A passenger, who had sued the airline for injury to his knee by a metal serving cart as it was rolled down the aisle in 2019 on a flight from El Salvador to New York, was advised that the lawsuit should be dismissed because the statute of limitations had expired. His lawyer, however, responded with the infamous 10-page brief offering more than half a dozen court decisions supporting their argument that the case should be allowed to proceed. There was only one problem: None of the cases cited in the brief could be found.

The decisions, although they named previous lawsuits against Delta Airlines, Korean Airlines and China Southern Airlines, and offered realistic names of supposedly injured passengers, were not real.

“I heard about this new site, which I falsely assumed was, like, a super search engine,” lamely offered the embarrassed attorney.

“Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet,” explained The NYT.
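
That description can be made concrete with a toy example. The snippet below builds a tiny statistical model of which word follows which in a made-up scrap of text and then samples a continuation; real large language models are neural networks trained on billions of examples, but the next-fragment idea is similar. Everything here, including the corpus, is invented for illustration.

```python
# Toy illustration of next-word statistics: count which word follows which
# in a (tiny, made-up) corpus, then generate text by sampling from those
# counts. The output can read fluently without being grounded in any facts,
# which is how fabricated case citations can look "real."
import random
from collections import defaultdict

corpus = "the court ruled that the case should proceed and the court agreed".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

word = "the"
generated = [word]
for _ in range(8):
    word = random.choice(next_words.get(word, corpus))
    generated.append(word)
print(" ".join(generated))
```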

Now the lawyer stands in peril of being sanctioned by the court. He declared that he had asked questions of the bot and had gotten in response genuine case citations, which he had included in his brief. He also printed out and included his dialogue with ChatGPT, which, at the end, offered him the words, “I hope that helps.”

But the lawyer had done nothing further to ensure that those cases existed. They seemed professional enough to fool the professional.

Now the tech world, lawyers and judges are fixated on this threat to their profession. And there are warnings that the threat of erroneous generative AI extends to all of humanity.

But this is not an entirely ominous story.

Researchers at OpenAI and the University of Pennsylvania have concluded that 80% of the U.S. workforce could see an effect on at least 10% of their tasks, according to The NYT. That means that some 300 million full-time jobs could be affected by AI. But is that all bad? Could AI become a helpful tool?

By using AI as an assistant, humans can focus on the judgment aspect of data-driven decision-making, checking and interpreting the information the bot provides. The bot supplies information; humans supply the judgment.

Ironically, the lawyer’s children probably passed their ChatGPT-fueled courses with good grades. Part of that is the way we teach students, offering them tons of details to memorize and regurgitate on tests or in term papers. The lawyer should have judged his ChatGPT-supplied data. Future lawyers now know they must. 

As for education, emphasis should go beyond “what” and even “so what” to “what’s next.” Learning should be less about having the facts or the history and more about how to think, how to analyze, how to interpret and how to take the next steps. Can chatbots do that? Perhaps in an elementary way they now can. Someday they will in a larger context. And that poses a threat to the survival of humanity, because machines will no longer need us.

By Michael E. Russell

Two weeks ago I had the scary experience of watching 60 Minutes on CBS. The majority of the telecast pertained to A.I. (artificial intelligence). Scott Pelley of CBS interviewed Google CEO Sundar Pichai. His initial quote was that A.I. “will be as good or as evil as human nature allows.” The revolution, he continued, “is coming faster than one can imagine.”

I realize that my articles should pertain to investing, however, this 60 Minutes segment made me question where we as a society are headed.

Google and Microsoft are investing billions of dollars into A.I. using microchips built by companies such as Nvidia. Since 2019, Pichai has led both Google and its parent company Alphabet, which is valued at $1.3 trillion. Worldwide, Google handles 90% of internet searches, and its software runs on 70% of smartphones. It is presently in a race with Microsoft for A.I. dominance.

Two months ago Microsoft unveiled its new chatbot. Google responded by releasing its own version named Bard. As the segment continued, we were introduced to Bard by Google Vice President Sissie Hsiao. The first thing that hit me was that Bard does not scroll for answers on the internet like the Google search engine does.

What is confounding is that these systems, running on microchips built by companies such as Nvidia, are more than 100 thousand times faster than the human brain. In my case, maybe 250 thousand times faster!

Bard was asked to summarize the New Testament as a test. It accomplished this in 5 seconds. Using Latin, it took 4 seconds.  I need to sum this up. In 10 years A.I. will impact all aspects of our lives. The revolution in artificial intelligence is in the middle of a raging debate that has people on one side hoping it will save humanity, while others are predicting doom. I believe that we will be having many more conversations in the near future.

Okay folks, where is the economy today?  Well, apparently inflation is still a major factor in our everyday life. The Fed will probably increase rates for a 10th time in less than 2 years.

Having been employed by various Wall Street firms over the past 4 decades, I have learned that high-priced analysts have the ability to foresee market direction no better than my grandchildren.

Looking back to May 2011, our savvy elected officials increased our debt ceiling, which led to the first-ever downgrade of U.S. debt from its top triple-A rating by S&P. This caused a very quick 19% decline in the S&P index. Sound familiar?

It appears that the only time Capitol Hill tries to solve the debt ceiling impasse is when their own portfolio is affected.

This market rally has been led by chatbot-affiliated companies. These stocks have added $1.4 trillion in stock market value this year. Keep in mind that just 6 companies were responsible for almost 60% of S&P gains. These are the 6 leaders: Microsoft, Alphabet, Amazon, Meta Platforms, Salesforce and, of course, Nvidia.

In the meantime, the Administration states that inflation has been reined in. What stores are they shopping in? Here is the data release from Washington, showing year-over-year changes from March 2022 to March 2023:

• Food and non-alcoholic beverages up 8.1%

• Bread and cereal products up 10.8%

• Meat and seafood up 4.3%

• Electricity up 15.7%

When 1 pound of hot dogs rises from $3.25 to $7.50, an increase of roughly 130%, that is not 8.1%. When Froot Loops go from $1.89 to $5.14, up about 170%, we are in trouble. The bureaucrats in D.C. make up numbers worse than George Santos.

On a positive note, the flowers are starting to bloom, the grass is starting to grow and we live in a special place. Of historic significance, we happen to be home to the second oldest active Episcopal Church in the United States. This year Caroline Church in Setauket will be celebrating its 300th anniversary. Congratulations.

Michael E. Russell retired after 40 years working for various Wall Street firms. All recommendations being made here are not guaranteed and may incur a loss of principal. The opinions and investment recommendations expressed in the column are the author’s own. TBR News Media does not endorse any specific investment advice and urges investors to consult with their financial advisor. 

By Leah S. Dunaief

To see how it worked, when I got the link for ChatGPT, the OpenAI artificial intelligence software, sent by my grandson, I asked it to write 650 words about Franklin Delano Roosevelt. Why FDR? I had just finished reading a book about FDR’s “Splendid Deception,” the ways the president hid his polio infirmity from the American public, and he was on my mind.

So, I am going to let you see what was written immediately, on command, in less than five minutes before my disbelieving eyes. Remember, this was authored by a robot.

President Franklin D. Roosevelt, commonly known as FDR, is remembered as one of the most influential and transformative leaders in American history. During his 12-year tenure as President, he led the United States through some of its most challenging times, including the Great Depression and World War II.

FDR was born into a wealthy family in Hyde Park, New York, in 1882. He attended Harvard University and Columbia Law School before beginning a career in politics. He served as a New York state senator from 1910 to 1913, and later as Assistant Secretary of the Navy under President Woodrow Wilson.

In 1921, at the age of 39, FDR was struck by polio, which left him paralyzed from the waist down. Despite this setback, he remained active in politics and was elected Governor of New York in 1928.

It was during the Great Depression, however, that FDR’s leadership would truly shine. When he took office in 1933, the country was in the midst of the worst economic crisis in its history. Millions of Americans were out of work, banks were failing, and confidence in the government was at an all-time low.

FDR responded to the crisis with a bold and ambitious plan to revitalize the economy, which he called the New Deal. The New Deal included a range of programs and policies designed to provide relief to the unemployed, stimulate economic growth, and reform the financial system.

One of the most famous New Deal programs was the Civilian Conservation Corps, which provided jobs to young men in conservation and reforestation projects. Other programs included the Agricultural Adjustment Act, which sought to stabilize crop prices and support farmers, and the National Industrial Recovery Act, which established codes of fair competition for businesses.

FDR’s New Deal was not without its critics, who argued that it represented an unwarranted expansion of federal power and an infringement on individual liberties. But for many Americans, especially those who were struggling to make ends meet during the Depression, FDR’s bold leadership and willingness to take decisive action were a source of hope and inspiration.

FDR was also a key figure in American foreign policy during World War II. When the war broke out in Europe in 1939, he declared that the United States would remain neutral. But after the Japanese attack on Pearl Harbor in 1941, he led the country into the war with a resolve and determination that would help turn the tide against the Axis powers.

FDR was instrumental in building the alliance with the Soviet Union and the United Kingdom, which would eventually lead to victory in Europe. He also played a key role in the development of the atomic bomb, which would be used to end the war in the Pacific.

Despite his many accomplishments, FDR was not without his flaws. Critics have accused him of overreaching in his use of executive power, and of turning a blind eye to human rights abuses committed by the Soviet Union during the war. But his legacy remains one of visionary leadership and bold action in the face of crisis.

FDR’s famous quote, “The only thing we have to fear is fear itself,” has become a rallying cry for leaders in times of crisis. It reflects his belief that the American people had the strength and resilience to overcome even the most daunting challenges, and that with the right leadership and the right policies, anything was possible.

Today, as the United States faces new challenges, from economic inequality to political polarization to climate change, FDR’s legacy remains a source of inspiration and guidance. His commitment to bold action in the face of crisis, his belief in the power of government to make a positive difference in people’s lives, and his unwavering faith in the American people, continue to resonate with us today.

How do teachers distinguish what was written by a student or by the bot? By the same token, having a bot help in other contexts, such as planning a schedule or working up a travel itinerary can be a huge help. ChatGPT, launched this past November, can analyze content from a wide range of information on the internet and then create original content. But is it always correct information? Will the bot take jobs from humans? Or will it merely offer a baseline of information upon which humans can then add more nuanced input?

Stay tuned.

By Daniel Dunaief

I’m really writing this. Or am I?

Now that I’ve seen artificial intelligence in action, I know that the system, such as it is, can write impressive pieces in much shorter time than it takes me to write a column or even this sentence.

And yet, I don’t want a machine to write for me or to reach out to you. I prefer the letter by letter, word by word approach I take and would like to think I earn the smile, frown or anything in between I put on your face as a result of the thinking and living I’ve done.

However, I do see opportunities for AI to become the equivalent of a personal assistant, taking care of needed conveniences and reducing inconveniences. For conveniences, how about if AI did the following:

Grocery shopping: I’m sure I get similar foods each week. Maybe my AI system could not only buy the necessary and desired food items, but perhaps it could reduce the ones that are unhealthy or offer new recipes that satisfy my food preferences.

Dishes: I’m not looking for a robot akin to “The Jetsons,” but would love to have a system that removed the dirt and food from my dishes, put them in the dishwasher, washed them and then put them away. An enhanced system also might notice when a dish wasn’t clean and would give that dish another wash.

Laundry: Okay, I’ll admit it. I enjoy folding warm laundry, particularly in the winter, when my cold hands are starting to crack from being dry. Still, it would save time and energy to have a laundry system that washed my clothes, folded them and put them away, preferably so that I could see and access my preferred clothing.

Pharmacy: I know this is kind of dangerous when it comes to prescriptions, but it’d be helpful to have a system that replenished basic, over-the-counter supplies, such as band-aids. Perhaps it could also pick out new birthday and greeting cards that expressed particular sentiments in funny yet tasteful ways for friends and family who are celebrating milestone birthdays or are living through other joyful or challenging times.

For the inconveniences, an AI system would help by:

Staying on hold: At some point, we’ve all waited endlessly on hold for some company to pick up the phone to speak to us about changing our flights, scheduling a special dinner reservation or speaking with someone about the unusual noise our car makes. Those “on hold” calls, with their incessant chatter or their nonstop hold music, can be exasperating. An AI system that waited patiently, without complaint or frustration and that handed me the phone the moment a person picked up the call, would be a huge plus.

Optimize necessary updates: Car inspections, annual physicals, oil changes, and trips to the vet can and do go on a calendar. Still, it’d be helpful to have an AI system that recognizes these regular needs and coordinates an optimal time (given my schedule and the time it’ll take to travel to and from these events) to ensure I don’t miss an appointment and to minimize the effort necessary.

Send reminders to our children: Life is full of balances, right? Too much or too little of something is unhealthy. These days, we sometimes have to write or text our kids several times before we get to speak with them live. An AI system might send them a casual, but loving, reminder that their not-so-casual but loving parents would like to speak with them live.

Provide a test audience: In our heads, we have the impulse to share something funny, daring or challenging, like, “hey, did you get dressed in the dark” or “wow, it must be laundry day.” Sure, that might be funny, but an AI system designed to appreciate humor in the moment — and to have an awareness of our audience — might protect us from ourselves. Funny can be good and endearing, but can also annoy.

Ramana Davuluri

By Daniel Dunaief

Ramana Davuluri feels like he’s returning home.

Davuluri first arrived in the United States from his native India in 1999, when he worked at Cold Spring Harbor Laboratory. After numerous other jobs throughout the United States, including as Assistant Professor at Ohio State University and Associate Professor and Director of Computational Biology at The Wistar Institute in Philadelphia, Davuluri has come back to Long Island. 

As of the fall of 2020, he became a Professor in the Department of Biomedical Informatics and Director of Bioinformatics Shared Resource at Stony Brook Cancer Center.

“After coming from India, this is where we landed and where we established our life. This feels like our home town,” said Davuluri, who purchased a home in East Setauket with his wife Lakshmi and their six-year-old daughter Roopavi.

Although Davuluri’s formal training in biology ended in high school, he has applied his foundations in statistics and computer programming and, more recently, machine learning and deep learning algorithms to the problems of cancer data science, particularly for analyses of genomic and other molecular data.

Davuluri likens the process of the work he does to interpreting language based on the context and order in which the words appear.

The word “fly,” for example, could be a noun, as in an insect at a picnic, or a verb, as in to hop on an airplane and visit family for the first time in several years.

Interpreting the meaning of genetic sentences requires an understanding not only of the order of a genetic code, but also of the context in which that code builds the equivalent of molecular biological sentences.

A critical point for genetic sequences starts with a promoter, which is where genes become active. As it turns out, these areas have considerable variability, which affects the genetic information they produce.

“Most of the genetic variability we have so far observed in population-level genomic data is present near the promoter regions, with the highest density overlapping with the transcription start site,” he explained in an email.

Most of the work he does involves understanding the non-coding portion of genomes. The long-term goal is to understand the complex puzzle of gene-gene interactions at isoform levels, which means how the interactions change if one splice variant is replaced by another of the same gene.

“We are trying to prioritize variants by computational predictions so the experimentalists can focus on a few candidates rather than millions,” Davuluri added.

Most of Davuluri’s work depends on the novel application of machine learning. Recently, he has used deep learning methods on large volumes of data. A recent example includes building a classifier based on a set of transcripts’ expression to predict a subtype of brain cancer or ovarian cancer.

In his work on glioblastoma and high grade ovarian cancer subtyping, he has applied machine learning algorithms on isoform level gene expression data.
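
Neither the interview nor this article includes the underlying code; a minimal sketch of that kind of subtype classifier, using scikit-learn on randomly generated stand-in data rather than Davuluri's actual pipeline or datasets, might look like this.

```python
# Hypothetical sketch: predict a tumor subtype from isoform-level expression.
# The expression matrix and labels are random stand-ins, not real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_isoforms = 200, 1000
X = rng.lognormal(size=(n_samples, n_isoforms))  # isoform expression levels
y = rng.integers(0, 3, size=n_samples)           # three hypothetical subtypes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```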

Davuluri hopes to turn his ability to interpret specific genetic coding regions into a better understanding not only of cancer, but also of the specific drugs researchers use to treat it.

He recently developed an informatics pipeline for evaluating the differences in interaction profiles between a drug and its target protein isoforms.

In research he recently published in Scientific Reports, he found that over three quarters of drugs either missed a potential target isoform or targeted other isoforms with varied expression in multiple normal tissues.

Research into drug discovery is often done “as if one gene is making one protein,” Davuluri said. He believes the biggest reason for the failure of early stage drug discovery resides in picking a candidate that is not specific enough.

Ramana Davuluri with his daughter Roopavi. Photo by Lakshmi Davuluri

Davuluri is trying to make an impact by searching more specifically for the type of protein or drug target, which could, prior to use in a clinical trial, enhance the specificity and effectiveness of any treatment.

Hiring Davuluri expands the bioinformatics department, in which Joel Saltz is chairman, as well as the overall cancer effort. 

Davuluri had worked with Saltz years ago when both scientists conducted research at Ohio State University.

“I was impressed with him,” Saltz said. “I was delighted to hear that he was available and potentially interested. People who are senior and highly accomplished bioinformaticians are rare and difficult to recruit.”

Saltz cited the “tremendous progress” Davuluri has made in the field of transcription factors and cancer.

Bioinformatic analysis generally doesn’t take into account how genes can be interpreted differently in different kinds of cancer. Davuluri’s work, however, does, Saltz said.

Developing ways to understand how tumors interact with non-tumor areas, how metastases develop, and how immune cells interact with a tumor can provide key advances in the field of cancer research, Saltz said. “If you can look at how this plays out over space and time, you can get more insights as to how a cancer develops and the different parts of a cancer that interact,” he said.

When he was younger, Davuluri dreamt of being a doctor. In 10th grade, he went on a field trip to a nearby teaching hospital, which changed his mind after watching a doctor perform surgery on a patient.

Later in college, he realized he was better in mathematics than many other subjects.

Davuluri and Lakshmi are thrilled to be raising their daughter, whose name is a combination of the words for “beautiful” and “brave” in their native Telugu.

As for Davuluri’s work, within the next year he would like to understand variants. 

“Genetic variants can explain not only how we are different from one another, but also our susceptibility to complex diseases,” he explained. With increasing population level genomic data, he hopes to uncover variants in different ethnic groups that might provide better biomarkers.

Partha Mitra at the Shanghai Natural History Museum in China where he was giving a talk to children on how birds learn to sing.

By Daniel Dunaief

Throw a giant, twisted multi-colored ball of yarn on the floor, each strand of which contains several different colored parts. Now, imagine that the yarn, instead of being easy to grasp, has small, thin, short intertwined strings. It would be somewhere between difficult and impossible to tease apart each string.

Instead of holding the strings and looking at each one, you might want to construct a computer program that sorted through the pile.

That’s what Partha Mitra, a professor at Cold Spring Harbor Laboratory, is doing, although he has constructed an artificial intelligence program to look for different parts of neurons, such as axons, dendrites and soma, in high resolution images.

Partha Mitra at the Owl Cafe in Tokyo

Working with two-dimensional images that form a three-dimensional stack, he and a team of scientists have performed a process called semantic segmentation, in which they delineated all the different neuronal compartments in an image.
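
The published method isn't reproduced here; as a rough sketch of what per-pixel semantic segmentation means in this setting, a small fully convolutional network can assign each pixel of a 2D slice to a compartment class, with the labeled slices then stacked back into 3D. The architecture below (in PyTorch) is an assumption for illustration, not the researchers' published model.

```python
# Illustrative sketch of semantic segmentation of neuronal compartments:
# every pixel of a 2D microscopy slice gets a label (background, axon,
# dendrite or soma), and the labeled slices are stacked into a 3D volume.
import torch
import torch.nn as nn

N_CLASSES = 4  # background, axon, dendrite, soma

segmenter = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, N_CLASSES, 1),        # per-pixel class scores
)

slice_2d = torch.randn(1, 1, 256, 256)  # one grayscale image slice (dummy data)
logits = segmenter(slice_2d)            # shape: (1, 4, 256, 256)
labels = logits.argmax(dim=1)           # per-pixel compartment label
```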

Scientists who design machine learning programs generally take two approaches: they either train models to learn from data or tailor them based on prior knowledge. “There is a larger debate going on in the machine learning community,” Mitra said.

His effort attempts to take this puzzle to the next step, which hybridizes the earlier efforts, attempting to learn from the data with some prior knowledge structure built in. “We are moving away from the purely data driven” approach, he explained.

Mitra and his colleagues recently published a paper about their artificial intelligence-driven neuroanatomy work in the journal Nature Machine Intelligence.

For postmortem human brains, one challenge is that few whole-brain light microscopic data sets exist. For those that do exist, the amount of data is large enough to tax available resources.

Indeed, the total amount of storage to study one brain at light microscope resolution is one petabyte of data, which amounts to a million megapixel images.

“We need an automated method,” Mitra said. “We are on the threshold of where we are getting data at cellular resolution of the human brain. You need these techniques” for that discovery. Researchers are on the verge of getting more whole-brain data sets more routinely.

Mitra is interested in the meso-scale architecture, or the way groups of neurons are laid out in the brain. This is the scale at which species-typical structures are visible. Individual cells would show strong variation from one individual to another. At the mesoscale, however, researchers expect the same architecture in brains of different neurotypical individuals of the same species.

Trained as a physicist, Mitra likes the concreteness of the data and the fact that neuroanatomical structure is not as contingent on subtle experimental protocol differences.

He said behavioral and neural activity measurements can depend on how researchers set up their study and appreciates the way anatomy provides physical and architectural maps of brain cells.

The amount of data neuroanatomists have collected exceeds the ability of these specialists to interpret it, in part because of the reduction in cost of storing the information. In 1989, a human brain worth of light microscope data would have cost approximately the entire budget for the National Institutes of Health based on the expense of hard disk storage at the time. Today, Mitra can buy that much data storage every year with a small fraction of his NIH grant.

“There has been a very big change in our ability to store and digitize data,” he said. “What we don’t have is a million neuroanatomists looking at this. The data has exploded in a systematic way. We can’t [interpret and understand] it unaided by the computer.”

Mitra described the work as a “small technical piece of a larger enterprise,” as the group tries to address whether it’s possible to automate what a neuroanatomist does. Through this work, he hopes computers might discover common principles of the anatomy and construction of neurons in the brain.

While the algorithms and artificial intelligence will aid in the process, Mitra doesn’t expect the research to lead to a fully automated process. Rather, this work has the potential to accelerate the process of studying neuroanatomy.

Down the road, this kind of understanding could enable researchers and ultimately health care professionals to compare the architecture and circuitry of brains from people with various diseases or conditions with those of people who aren’t battling any neurological or cognitive issues.

“There’s real potential to looking at” the brains of people who have various challenges, Mitra said.

The paper in Nature Machine Intelligence reflected a couple of years of work that Mitra and others did in parallel with other research pursuits.

A resident of Midtown, Mitra, his wife Tatiana and their seven-year-old daughter have done considerable walking around the city during the pandemic.

The couple created a virtual exhibit for the New York Hall of Science in the Children’s Science Museum in which they described amazing brains. A figurative sculptor, Tatiana provided the artwork for the exhibition.

Mitra, who has been at Cold Spring Harbor Laboratory since 2003, said neuroanatomy has become increasingly popular over the last several years. He would like to enhance the ability of the artificial intelligence program in this field.

“I would like to eliminate the human proofreading,” he said. “We are still actively working on the methodology.”

Using topological methods, Mitra has also traced single neurons. He has published that work through a preprint in bioRxiv.