
U.S. Gun Deaths Reach an All-Time High

Gun deaths in the United States have reached an all-time high for the second year in a row. In 2021, 48,830 Americans died from firearm injuries, according to new data from the Centers for Disease Control and Prevention (CDC). These deaths include 26,328 suicides, 20,958 homicides, 549 unintentional gun deaths, 537 legal intervention deaths, and 458 firearm deaths of undetermined intent. That’s one death from gun violence in the U.S. roughly every eleven minutes.
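That per-minute figure follows from simple division of the CDC total above (a back-of-the-envelope check, rounded to the nearest tenth):

\[
\frac{365 \times 24 \times 60 \text{ minutes}}{48{,}830 \text{ deaths}} = \frac{525{,}600}{48{,}830} \approx 10.8 \text{ minutes per death}
\]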

Gun violence in the U.S. is an ongoing public health crisis. Amid the COVID-19 pandemic, there was an unprecedented spike in gun deaths that was largely driven by an increase in homicides. Though most Americans returned to their daily routines, gun deaths continued to increase in 2021.

Gun violence disproportionately impacts men: 85.7% of firearm deaths in 2021 involved a male victim. However, homicide and suicide rates differed significantly across racial and age groups.

In 2021, the U.S. experienced the highest gun homicide rate since 1994. The gun homicide rate was up 7.6% from 2020 and 45% from 2019. Between 2019 and 2021, gun homicide rates increased 49% among Black Americans and 55% among Native Americans. Young Black males were disproportionately the victims of homicides involving a firearm: Black teens and young men made up 2% of the total population but accounted for 36% of all gun homicide fatalities in 2021.

Homicides get a lot of media attention, but suicides make up the bulk of gun deaths in the U.S. Suicides involving a firearm were up 8.3% from 2020, the largest one-year increase in four decades and the highest number of gun suicides recorded since the CDC started tracking such data in 1968. White men were overrepresented among gun suicide deaths in 2021: despite making up only 30% of the U.S. population, they accounted for 70% of gun suicide deaths. White men over age 65 had a gun suicide rate four times the national average.

Gun violence is now the leading cause of death for children and teens in the U.S., with firearm injuries topping accidents and cancer since 2020. About two-thirds of gun deaths among children and teens are homicides.

How are things going in Georgia compared to other states? In 2021, Georgia had an above-average gun homicide rate, the 11th highest in the nation, and the 25th highest gun suicide rate, just above the national average.

Gun sales have doubled in the U.S. over the past decade, surging in 2020. Firearms are often marketed as a way to protect oneself from threats. In reality, gun ownership greatly increases one’s risk of dying by suicide or homicide, according to a meta-analysis of peer-reviewed research published in the Annals of Internal Medicine. The availability of firearms is a central reason why the U.S. has a higher rate of gun deaths than other developed nations.

The Johns Hopkins University Center for Gun Violence Solutions proposes a range of evidence-based, equitable policy recommendations to reduce gun deaths, including: legislation that enacts permit-to-purchase laws; red flag laws permitting removal of firearms from high-risk individuals; child access prevention laws to reduce gun accidents involving children; laws that restrict open carry in public places; laws placing restrictions on concealed carry for those with criminal records; laws banning concealed carry of firearms at specified, sensitive places; repealing stand your ground laws; and investing in community violence intervention programs.

Increasing rates of firearm ownership, high levels of gun sales, and a political climate that is unreceptive to restrictions on firearms all but ensure that deaths from firearms will remain a significant public health crisis for years to come.

Roscoe Scarborough, Ph.D. is interim chair of the Department of Social Sciences and associate professor of sociology at College of Coastal Georgia. He is an associate scholar at the Reg Murphy Center for Economic and Policy Studies. He can be reached by email at rscarborough@ccga.edu.

Our Capitalist Heritage

Edmund Burke (1729-1797), the great political rhetorician and British statesman, understood the civilization that British colonial America had become.

In 1769, three years into what would be a 28-year stint as a member of the House of Commons, Burke published a pamphlet in which he noted: “The pride and strength of the Americans is their trade.  A perfectly unimpeded commerce seems to them inseparable from liberty.”  

Six years later, Burke attempted to persuade his fellow MPs to do whatever was necessary to placate the American colonists and avoid war.  His unsuccessful but now famous “Speech on Conciliation with America,” delivered on March 22, 1775, contained this:

“America – which at this day serves for little more than to amuse you with savage men and uncouth manners; yet shall before you taste of death, show itself equal to the whole of that commerce which now attracts the envy of the world.  Whatever England has been growing to … in a series of Seventeen Hundred years, you shall see as much added to her by America in the course of a single life.”

Burke recognized that a zeal for enterprise and commerce was fundamental to who colonial Americans were and what their America was about.

It may seem surprising that what an Irish-born British statesman identified as fundamental to British colonial America is just as fundamental to the United States today.  Surprising, that is, until we recall that by July 4, 1776, the market-based American economy had been growing, developing and spreading for 169 years.

 British colonial America was capitalist from the get-go.  It did not develop according to a plan.  It developed according to the profit motive.

The idea of establishing colonies in America came not from the British crown or the British Parliament, but from profit-seeking British entrepreneurs and investors.  Joint-stock companies established the first and the bulk of the British colonies in America.

A joint-stock company was a form of business partnership.  It required a royal charter, but the partners financed the venture and assumed all risk.

Jamestown was a business venture of the Virginia Company of London.  Plymouth, of Pilgrim fame, was a venture launched by a group of 70 investors led by Thomas Weston.  Massachusetts Bay was launched by the Massachusetts Bay Company.

Colonists were wired into producing for the market from the outset.  Profits to the joint-stock companies, and to the colonists themselves, came from the sale of commodities produced by the colonists to buyers in Britain and elsewhere.  Which commodities would be most profitable was left to the colonists – and the market – to determine.

British commercial law was employed immediately, though with some modifications.  Land tenure law was a crucial modification.  The British Crown allowed only one type of land tenure in colonial America: free and common socage.  Meaning: land purchased from the Crown is owned free and clear.

Colonial Americans made enterprising use of the arrangement.  They scattered and settled wherever they found productive land.  They also bought and sold land as a speculative investment.  Land speculation was a leading industry in the colonies almost from their beginning.

The growth and development of British colonial America had an ironic consequence.  The joint-stock companies established colonies in America to produce commodities for export.  Yet, in short order, most colonists found the domestic market more lucrative than the export market.  Economic historians estimate that, in the 169 years from Jamestown to 1776, less than five percent of colonial American output was exported.

The British colonial America that Edmund Burke wrote and spoke of was a well-established capitalist civilization.  On July 4, 1776, that well-established capitalist civilization became a country.    

Should smartphones be banned from classrooms?

I have always allowed students to use electronic devices in my classes, with two rules: 1) no electronic devices on the front row of the classroom, which was reserved for those who needed a distraction-free environment; and 2) for everyone else, just don’t be annoying. But, following the advice of some of my wiser colleagues, I tried something new in my face-to-face classes last semester: I banned the use of cell phones during class.

My goal was to increase student engagement in class discussion and improve student retention of material. I will tell you how it went at the end of this article, but first, some data on the topic.

Last week, on an episode of the NPR radio show On Point, host Meghna Chakrabarti and her guests discussed a new Florida law allowing K-12 teachers to ban smartphones in their classrooms. The show’s interviewees cited peer-reviewed studies showing that students who text during class take significantly fewer notes, remember less of the material presented in class, and score lower on tests. Interestingly, they also discussed research finding that the effects of smartphone use in class are greater on long-term memory than on short-term memory. Students who are texting may be able to answer questions at the end of class but are likely to have forgotten the material before an exam.

These results make banning smartphones from classrooms seem like a no-brainer for students’ academic success.

As many callers into the show pointed out, however, the issue is much more complicated. A 2023 meta-analysis of 20 studies on smartphone use in education found overwhelming evidence that intentional use of smartphones by instructors can have very positive effects on learning outcomes.

One of the identified effective uses of smartphones in class – gamification of learning – is used widely on Coastal’s campus with great reviews from both professors and students. Students use their phones in class to answer questions in real time, keeping them engaged in learning throughout class time. In my principles of microeconomics classes, students use a smartphone app to participate in simulated markets and other lifelike scenarios that demonstrate theories we have discussed or will discuss. Feedback from students on these sorts of exercises is almost always positive, and they report that the games improve their retention of course material and understanding of real-world applications.

As with most things in life and education, I think the keys to effectively managing the relationship between classrooms and smartphones are intentionality and balance. The studies mentioned above suggest a sweet spot: integrating into lesson planning the creativity of smartphone apps and the power of a web full of information at students’ fingertips, while curbing unrelated texting and social media scrolling.

My sense is this balance may be easier to achieve in a K-12 classroom than in college. On Point interviewees talked about the Fear Of Missing Out (FOMO) effect experienced by children when their phones are within sight and discussed the merits of physically removing phones except during times when the teacher has planned to integrate the technology into their teaching. This is simpler to navigate with children than with adult learners. College students often have families, jobs, and other responsibilities that they may legitimately prioritize over my class, and for them, FOMO may reflect very valid anxiety about missing an important message.

I asked my students not to use their phones during class, but I did not feel comfortable asking them to surrender their phones or even to put them out of sight. Thus, my no-phones policy was difficult to enforce uniformly. I will give it more thought before August, but in my classroom, any gains in engagement were slight and likely not worth the stress of cellphone policing.

———–

Dr. Melissa Trussell is a professor in the School of Business and Public Management at College of Coastal Georgia who works with the college’s Reg Murphy Center for Economic and Policy Studies. Contact her at mtrussell@ccga.edu. The views expressed in this article are those of the author and do not necessarily represent those of the College of Coastal Georgia.

Mitigating Flood Risk in Vulnerable Communities

Hurricane season started on June 1st. This leads many people in coastal areas to reflect on flood risk. According to the Federal Emergency Management Agency, flooding causes 90% of disaster damage every year in the U.S. Not surprisingly, some groups are at a greater risk than others. Studies show that rising global temperatures put even more people at risk of flooding, including households in areas that are far from the coast and areas that have no recent flood history.

The storm surge associated with hurricanes and nor’easters brings extreme flood risk. The Golden Isles are fortunate that hurricanes are relatively uncommon here. It’s been 125 years since our area experienced a major hurricane. In 1898, a category 4 hurricane made landfall at Cumberland Island with 135 mph winds. A 16-foot storm surge was recorded in Brunswick.

Would your residence flood in a major hurricane? Mine would. How high above sea level is your property? Google “Glynn County Web Flood Maps” to find out. A large percentage of properties in Glynn County and Brunswick would flood in a category 1 or 2 hurricane, especially if the storm coincides with a high tide.

Golden Isles residents know that our area is prone to other types of flooding, including river flooding, sunny-day tidal flooding, and urban flooding from heavy rains. Risk is heightened in the absence of protective infrastructure and adequate stormwater systems.

Location puts certain properties at risk, but some groups face a greater risk of harm from flooding than others. Those at greatest risk are lower-income, older, and otherwise vulnerable populations who live in low-lying areas.

According to research by Smitha Rao and colleagues, 57% of the population is not prepared with food, water, transportation, and emergency funds to withstand a disaster. Statistically, households that are lower-income, housing-insecure, or led by a woman, as well as households with children, are less prepared for a disaster.

Older adults who live in flood-prone areas are at higher risk due to health needs or disabilities that can affect their ability to evacuate. They are more likely to be socially isolated, often lacking help to prepare for a storm, evacuate, or access resources for post-flood recovery.

Households that are struggling to make ends meet are less likely to be prepared for the next flood or other disaster. These folks often lack the means to evacuate to avoid a disaster, lack the resources to repair and replace property damaged by flooding, and are less likely to return if displaced by flooding.

Making matters worse, the vast majority of American households at risk of flooding do not have flood insurance. Because homeowners insurance does not cover flood damage, those without flood insurance risk financial ruin if their home floods.

Many river levees, retention ponds, and stormwater systems in the U.S. are nearing the end of their useful life or are already beyond it. Even worse, most of our nation’s flood control infrastructure was designed for 20th century storms and flooding. A warming climate leads to extreme wet and dry conditions that have increased in duration, extent, and severity. It is essential to incorporate climate change into planning for stormwater systems and other flood prevention measures.

Long-term solutions are required to address flood risk. Protecting or rehabilitating dunes, wetlands, mangrove forests, and coral reefs mitigates coastal flooding. Rain gardens and bioswales can reduce the runoff that contributes to urban and river flooding. Updating or building levees, wave attenuation devices, and seawalls is costly but can protect property and people in certain settings.

A key long-term solution is expanding access to safe and affordable housing. State and local governments can buy properties that frequently flood or change zoning rules to prevent people from moving into harm’s way. Additionally, new developments should be planned for future flood risk in our changing climate.

Roscoe Scarborough, Ph.D. is interim chair of the Department of Social Sciences and associate professor of sociology at College of Coastal Georgia. He is an associate scholar at the Reg Murphy Center for Economic and Policy Studies. He can be reached by email at rscarborough@ccga.edu.

Socialism Abounds in Irony

I’m sure most people presume that the history of economics must be boredom pushed to a life-draining extreme.

In fact, the history of economics is a grand story, full of drama, tension and surprising twists; ideas come to life as characters, and the thinkers behind the ideas complete the cast.  Granted it’s thin in romance and razor thin in the steamy stuff.  But it’s thick in irony.

The history of socialist thought is a treasure trove of irony. 

For starters, the bulk of 200-plus years of Continental European, British and American socialist writing is about capitalism, not socialism.  The critique of capitalism dominates this massive literature; it is and always has been central to socialist thought.  Socialist writers have also had something to say about society, human nature and fellow socialists.  They have had little to say about socialism.    

That’s observation, not criticism, and the observation is not controversial.  Historians of socialist thought as erudite and diverse as George Lichtheim, Robert Heilbroner and Michael Newman have all made note of how much socialist thought has to say about capitalism and how little it has to say about socialism.

Karl Marx and Friedrich Engels, the two most renowned socialists of all, provide a prime example of the irony.  Their critique of capitalism is fierce and voluminous.  Their critique of other socialists is fierce and extensive.  How would economic decisions be made in a socialist system?  Crickets.

Marx and Engels stated what every socialist understood then and understands now: that socialism is a system in which private enterprise is prohibited.  Beyond that, Marx and Engels said nothing about socialism as a system.

Prominent in the history of socialist thought is the charge that capitalism is grounded on individualism and competition, whereas socialism is grounded on solidarity and cooperation.  What makes this ironic is that socialists have a long history of being at odds with each other.  Numerous socialist writers have commented on the irony, including British socialist Anthony Wright.

Wright’s excellent book, “Socialisms,” published in 1986, opens with this: “The history of socialism is the history of socialisms.  Moreover, it is a history not of fraternal plurality, but of rivalry and antagonism.”  A sentence later: “Many socialists have reserved their sharpest arrows for attacks on other socialists.”

Marx and Engels are prime cases of this irony, as well.  The vitriol with which Marx and Engels laid into socialists who failed to toe the Marx-Engels line is eclipsed only by that of V.I. Lenin, when, in his “Conditions of Admission to the Communist International,” he “declared war on the whole bourgeois world and on all Yellow social-democratic parties.”  By “all Yellow social-democratic parties,” Lenin meant all socialist parties not in lockstep with Soviet communism.

Socialist thinkers have consistently championed the “working class.”  Of the many moral outrages that socialists find in capitalism, the exploitation of the working class tops the list.  Socialists have also claimed that environment determines perspective: grow up bourgeois and you’ll interpret life from a bourgeois perspective; grow up working class and you’ll interpret life from a working-class perspective.

The irony is that the socialist thinkers who champion the working class have, almost to a person, come from thoroughly bourgeois backgrounds, while members of the working class have shown little interest in socialism but great interest in being able to live bourgeois lives for all the work they contribute to bourgeois capitalist production.

The unwillingness of the working class to shake its “working-class style bourgeois” mindset has frustrated socialist thinkers for a long time. So, next time you have a hankering for irony, there’s a treasure trove in the history of socialist thought just waiting for you.

Young adults are living at home longer. Is it paying off?

For educators, the month of May is always an exciting time. We get to celebrate the most important part of our jobs: helping our students reach their goals. For many students across our region, the big goal this month is graduation from high school or college. And, many parents are celebrating the bittersweet milestone of seeing their children move out on their own for the first time.

To those parents for whom it is more bitter than sweet, I have some good news. For the rest of you, let’s just hope your family can buck the trend.

Young adults are moving back in with their parents in greater numbers than ever before.

During the Covid-19 pandemic, a record number of young adults moved back in with their parents. In July 2020, for the first time since the Great Depression, more than 50% of adults aged 18 to 29 lived with their parents. Many of these young adults reported relocating for pandemic-specific reasons—college campuses closing or loss of employment due to the pandemic economy. As one might expect, then, most of them were younger, traditional college students or entry-level workers.

But, even in a slightly older cohort of young adults between the ages of 25 and 34, the number of folks living with their parents was substantial: 18% in 2020.

Post-pandemic, that number has decreased only slightly, to 15.6% in 2022.

So, if they went home because of the pandemic, why aren’t they leaving now that colleges have reopened, the economy has rebounded, and health risks have subsided?

It turns out that the trend of young adults, ages 25 to 34, living with their parents longer or moving back in later predates the pandemic. The peak of 18% in 2020 was only one percentage point higher than the previous year. Contrast this with 10% in 1984 and 10-12% throughout the 1990s and until 2008, when the number began its steady increase to where we are today.

It seems we are looking at more of a Great Recession effect than a pandemic effect.

Regardless of the cause, it is equally interesting to consider the effects of having so many young adults still living with their parents. And on this, the pros and cons are both strong. In fact, I first started thinking about writing on this topic in February, when I read the following two headlines in Fortune within a couple of weeks of each other: On January 23, “Millennials and Gen Z living at home are a ‘train wreck’ thanks to their parents …” and on February 16, “Millennials’ decision to live with their parents has paid off…”

Which is it—are they train wrecks, or are their decisions paying off?

The “train wrecks” article quotes Dave Ramsey, who is concerned about record-high spending on designer accessories among this group. It cites a Morgan Stanley report that young adults (ages 18 to 29) are a growing market for luxury goods in the U.S. Ramsey’s colleague Jade Warshaw criticizes parents for allowing this sort of luxury spending by young adults living at home with debt and low-paying jobs.

On the other hand, the “decision… has paid off” article reports that 6% of individuals with student debt were able to make substantial progress toward paying it off because they moved home. Data from the National Association of Realtors suggests, too, that given soaring rent and housing prices, staying home has helped young adults save toward down payments on their own homes. In 1995, 15% of first-time homebuyers moved into their homes from a family member’s house. Today, that number is 27%.

Surely, the world is very different for today’s young adults than for generations past. College and housing are both far more expensive than before. In many ways, staying home is the financially savvy thing to do. Though buying that Rolex may not be the best use of the money saved through that choice.

So, I don’t know; maybe both article titles are true. Maybe they’ll save enough to move out and then wreck the train with their luxury spending habits. Or maybe they’ll move out and figure it out like so many before them have done.

————-

Dr. Melissa Trussell is a professor in the School of Business and Public Management at College of Coastal Georgia who works with the college’s Reg Murphy Center for Economic and Policy Studies. Contact her at mtrussell@ccga.edu. The views expressed in this article are those of the author and do not necessarily represent those of the College of Coastal Georgia.

Economic Discounting and Environmental Decision-Making

Like most people, I have been known to procrastinate on tasks and projects that are not at the top of my list of “interesting things to do.” It is not because I am unaware of the consequences of putting these things off. Rather, I am aware that putting the task off is a bad idea, but I do it anyway. Doing something that we know will have negative consequences is essentially an irrational choice.

Psychologists at the Procrastination Research Group at Carleton University explain, however, that in the immediate present, putting off a task provides relief – a small, though temporary, reward. Our inclination to prioritize immediate needs over future ones is a great example of present bias. Behavioral decision-making expert Hal Hershfield has found that on a neural level, we think about our “future selves” in an abstract way rather than a personal way. When we put things off, our brains actually think that the consequences are somebody else’s problem – the problem of the “future self.”

Much like procrastination, our tendency to favor the present and discount the future can influence not just small tasks but larger decisions. This is one factor that influences our decisions around climate change. Yes, there are politics, science, economics, information, and misinformation involved, but there is also the basic fact that addressing issues of climate change means changing the status quo of how we live and consume. For many people, and certainly many decision-makers, this is an unpleasant task that would be better dealt with in the future. We have a neural tendency to perceive issues like climate change as the problem of our future self – a relative stranger in the scheme of things.

Like procrastination, economic discounting in decision-making has an impact on the future. Economic discounting refers to the idea that future costs and benefits are worth less than current costs and benefits. If a decision will have costs or benefits in the future, we discount them to reflect the fact that they are not worth as much as costs or benefits experienced today.
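In standard form, discounting converts a future value FV into a present value PV using a discount rate r and a time horizon of t years. The 3% rate and 50-year horizon below are illustrative assumptions for this sketch, not figures from the column:

\[
PV = \frac{FV}{(1+r)^{t}}, \qquad \text{e.g.,} \quad \frac{\$100}{(1.03)^{50}} \approx \$22.81
\]

At that rate, a benefit worth $100 fifty years from now carries less than a quarter of its face value in today’s cost-benefit calculus.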

In the context of climate-related decisions, the assumption is that the costs of reducing greenhouse gas emissions today are greater than the benefits that will be realized in the future. Therefore, according to the logic of economic discounting, it makes more sense to wait and deal with the problem in the future when the costs will be lower and the benefits will be higher.

One problem with this is that economic discounting assumes that future generations will be better off than the current generation. On the contrary, a recent survey by the Wall Street Journal and the National Opinion Research Center found that three-quarters of respondents did not think their children were likely to be better off than they are economically. Meanwhile, analysts at Morgan Stanley found in 2021 that the “movement to not have children owing to fears over climate change is growing and impacting fertility rates quicker than any preceding trend in the field of fertility decline.”

How might we change our environmental decision-making in a way that considers the long-term impact of our actions? This requires a shift in thinking from the short-term to the long-term benefits of our actions. One way to address the limitations of economic discounting is to use alternative decision-making frameworks, such as intergenerational equity. Intergenerational equity is the idea that each generation has an equal right to the use and enjoyment of the environment – to the ecosystem services we enjoy. This idea has long been part of the democratic underpinnings of the Iroquois (Haudenosaunee Confederacy) through their commitment to the seventh generation concept. In their decision-making, they consider the seventh generation to come and remember the seventh generation that came before, allowing them to see benefits beyond the present moment.

Perhaps this shift in thinking might serve to rewire the neural patterns that make us see our future self as a stranger and instead bring future generations into our present-day thinking.

Dr. Heather Farley is chair of the Department of Criminal Justice, Public Policy & Management and assistant professor of public management in the School of Business and Public Management at College of Coastal Georgia, and an environmental policy researcher. She is an associate of the College’s Reg Murphy Center for Economic and Policy Studies. The opinions found in this article do not represent those of the College of Coastal Georgia.

Why is Teen Mental Health Declining?

Teen mental health is poor and has been on the decline for decades. In my last column, I described U.S. teens’ declining mental health, increasing thoughts of suicide, and increasing attempts at suicide. In today’s column, I explain why teen mental health is so bad today.

Adolescent mental health and well-being deteriorated between 2011 and 2021, according to data from the most recent Youth Risk Behavior Survey from the Centers for Disease Control and Prevention. In 2021, 42% of high school students reported feeling so sad or hopeless that they could not engage in their regular activities for at least two weeks during the year. Some 22% of high school students seriously considered suicide, and 10% reported attempting suicide, during the last year. The statistics are even worse for teen girls.

No single theory can explain declining mental health among teens. However, the rapid decline in mental health among teens can likely be attributed to sociocultural changes in how young people are interacting with others and their environment.

Stressful life events are correlated with depression and many other mental health conditions. Physical or emotional abuse, dropping out of school, a parent losing their job, financial difficulties at home, parental divorce, exposure to suicide, illness, or loss of a loved one are all correlated with poor mental health outcomes. The stressors associated with the COVID-19 pandemic further exacerbated these existing issues.

Parenting also plays a role in teens’ mental health. Low levels of parental warmth, perception of being rejected by parents, and weaker support from parents are all correlated with poor mental health among youth. Declining marriage rates and high rates of single parenting put kids at risk of childhood poverty, worsening their risk of poor mental health outcomes.

Relationships at school and with peers also shape children’s mental health, as young people deal with unresolved grief, interpersonal disputes, and challenges transitioning into new social roles. Many of these are perennial challenges associated with coming of age.

Is teen mental health declining because of smartphone usage, social media, or the internet? There is always fear that new technologies will ruin our youth.

Mass and social media consumption are correlated with mental health outcomes. Social scientists have linked internet or social media use with depression and eating disorders. Similarly, research has documented that exposure to unrealistic beauty standards in media is correlated with a range of mental health challenges, especially for girls.

Smartphone or social media usage itself may not be the cause of teens’ poor mental health. The popularity of social media is symptomatic of a society in which face-to-face community has atrophied. Young people are spending less time with other young people than prior generations. To make matters worse, teens’ social support systems further eroded during the pandemic. Browsing social media or texting with friends does not yield the same mental health benefits as face-to-face interaction with one’s peers.

Alternatively, declining social stigma associated with mental illness and suicide may result in more teens self-reporting these challenges. Compared to prior generations, it may be that Gen Z is simply more comfortable self-reporting mental health challenges, thoughts of suicide, and suicide attempts.

There is no singular cause of declining mental health among teens. As a sociologist, I tend to prioritize institutional or cultural explanations of behavior. These seem apt to explain the rapid deterioration of teen mental health.

Fortunately, this means that institutional reforms can reverse declines in teen mental health. Increasing access to mental health services is essential. Other practical reforms include training teachers in trauma-responsive classroom strategies, expanding the focus on mental health in the K-12 curriculum, and expanding public health efforts to promote mental health and connect those in need with available resources.

Roscoe Scarborough, Ph.D. is interim chair of the Department of Social Sciences and associate professor of sociology at College of Coastal Georgia. He is an associate scholar at the Reg Murphy Center for Economic and Policy Studies. He can be reached by email at rscarborough@ccga.edu.

The Invisible Hand that Never Was

Studying the history of economics taught me a priceless lesson: there’s no substitute for the horse’s mouth.

Many thousands of people have been taught that economics was born in 1776 with the publication of “The Wealth of Nations,” the book in which Adam Smith expounded his theory that the free market always works, the theory he named the “invisible hand of the market.”

It’s a strange teaching.  For nowhere in “The Wealth of Nations” does Smith refer to any theory as the “invisible hand of the market.”  Nowhere in the book do the words “invisible hand of the market” even appear.  Only once in Smith’s tome of more than a thousand pages do the words “invisible hand” appear.  They appear as a metaphor, not for a market or any feature of markets, but for the desire to reduce risk in the specific case of a merchant deciding to employ his capital at home rather than abroad.

Smith was an exceptionally careful writer who lectured on rhetoric as a professor and wrote on the proper use of metaphor.  If Smith had wanted to advance a theory that he identified by “the invisible hand” metaphor, he would have said so clearly and explicitly, and more than once.  He didn’t.  Not once.

Nowhere in “The Wealth of Nations” does Smith argue anything comparable to “the free market always works.”  Smith argued that self-interested behavior in markets often, inadvertently, benefits society.  Smith also argued that self-interested behavior in markets often benefits society more than behavior intended to benefit society.

But Smith was anything but dogmatic.  The first three sections of “The Wealth of Nations” alone offer 70 examples of self-interested behavior that inadvertently harms society.

So, here we are.  Academics routinely teach that the invisible hand metaphor carried special meaning for Smith, though it didn’t, and that the special meaning was that the free market always works, a view Smith never held.

Who is responsible for this absurdity?

For starters, Paul Samuelson.

Paul Samuelson received the Nobel Prize in economics in 1970 for analyzing economic life with rigorous mathematical theorizing.  According to Samuelson’s rigorous mathematical theorizing, markets are efficient only under conditions that are impossible in real life.     

Samuelson was not the first to misrepresent Adam Smith.  But compared to Samuelson’s, earlier misrepresentations were bush league.  

Samuelson’s misrepresentation appeared first in 1948 in a textbook he wrote for a basic college economics course.  On page 36, we read: “…he (Adam Smith) was so thrilled by the recognition of order in the economic system that he proclaimed the mystical principle of the ‘invisible hand’: that each individual in pursuing his own selfish good was led, as if by an invisible hand, to achieve the best good of all, so that any interference with free competition by government was almost certain to be injurious.”

Samuelson’s textbook, still in print in its 19th edition, has sold four million copies.  The bogus invisible hand story got more bogus with each edition. 

Why Samuelson misrepresented Smith is unclear.  His distorted Smith certainly served as a foil to advance Samuelson’s own position about markets, but speculating on motivation is best avoided.    

Over the years, most economists accepted Samuelson’s misrepresentation, and most still do today.  This is for two reasons.  One is Samuelson’s exalted standing in economics.  The other is that few economists have any interest in the history of their discipline.  For every economist who has read “The Wealth of Nations,” there are thousands who haven’t.  Historians of economics have been calling out bogus invisible hand stories for decades.  The audiences have been very small.

College Teacher Candidates Becoming Trauma-Responsive

The American Psychological Association’s website defines trauma as “an emotional response to a terrible event.” Terrible events can be abuse or neglect but also can include societal-level events such as pandemics, natural disasters, and systemic discrimination. Nearly every school-aged child today has experienced trauma in one form or another.

Last month, my colleague Dr. Roscoe Scarborough used this column to highlight the mental health crisis currently facing youth in America. Much of this crisis is due to trauma and to our generally being ill-prepared to help children process and heal from trauma. Even those who are trauma-informed (know about trauma) often are not trained to be trauma-responsive (know what to do about it).

It is a big deal, then, that College of Coastal Georgia’s Department of Education and Teacher Preparation recently entered into a partnership with local non-profit Hope 1312 Collective that will equip teachers to begin addressing this crisis.

On March 17, Coastal’s senior cohorts of elementary and middle-grades teacher candidates participated in 8 hours of Trust-Based Relational Intervention (TBRI)® trauma-responsive classrooms training. This first annual training was provided by Hope 1312 Collective and sponsored by College of Coastal Georgia and Communities in Schools.

Traditional disciplinary models use punitive measures with the aim of getting immediate compliance. These models are short-sighted, do not respect the biological needs of children, leave us emotionally and often physically disconnected from our students, and do not lead to lasting change. TBRI offers strategies that are biologically respectful, see and meet needs behind behaviors, encourage connection, and promote healing rather than re-traumatization.

TBRI was developed by the Karyn Purvis Institute of Child Development (KPICD) at Texas Christian University and is described by KPICD as “an attachment-based, trauma-informed intervention that is designed to meet the complex needs of vulnerable children.” TBRI addresses the five B’s of relational trauma: brain, biology, body, beliefs, and behavior. The trauma-responsive classrooms curriculum begins with a deep dive into the ways that trauma affects a child’s brain development and their biological response to stress or fear. Teachers are best able to respond to their students’ trauma-related behaviors if they understand the brain processes and biology contributing to the behavior.

Teacher candidates then learned about the importance of making sure the body’s nutritional and sensory needs are met so that their students are physically capable of learning, growing, and making good choices. Children who have experienced trauma are especially likely to have sensory needs unlike those of their classmates – a need for movement or touch, sensitivity to light or sounds, etc. Understanding how to recognize and respond to a child experiencing sensory deficit or overload helps a teacher see the need behind behavior that may seem willfully noncompliant but is instead the body’s or brain’s way of communicating that need.

Finally, participants learned several practical tools for correcting behavior in ways that encourage connection with students and that reflect a child’s preciousness rather than reinforcing negative belief systems caused by trauma (e.g. feeling unworthy or like their voice does not matter).

When surveyed, fewer than half of participating teacher candidates (7 of 16) said they felt behaviors were managed effectively at the schools where they are currently student-teaching. After the training, participants reported feeling personally better equipped to respond to a traumatized child. Moreover, all agreed TBRI would make a lasting impact within the school system.

Data shows they are right. TBRI does make a lasting impact within the school system. A recent study of TBRI implementation in a school in Tulsa found an 18% decrease in behavioral incident reports after 2 years of TBRI and a 23% decrease in office referrals for students who had been referred 3 or more times prior to TBRI implementation.

Given the right positive influences, the brain has the ability to heal from the effects of trauma. TBRI gives Coastal’s teacher candidates the tools to be part of that healing.

————-

Dr. Melissa Trussell is a professor in the School of Business and Public Management at College of Coastal Georgia who works with the college’s Reg Murphy Center for Economic and Policy Studies. Contact her at mtrussell@ccga.edu.