The Real Cause of America’s Declining Labor Participation Rate? Boys and Their Joysticks

A wily and widespread addiction has caused a massive epidemic among young men — one so bad that they are no longer working. This addiction has a name: video games. That’s right, video games have sapped America’s male youth of its ability to be productive, to function eight hours a day at a job. Their brains are fried.
That’s what you would conclude from media reports on a study titled “Leisure Luxuries and the Labor Supply of Young Men,” which finds that between 2000 and 2016, young men’s growing taste for leisure accounted for 23 to 46 percent of the decline in their market work.
The reason, according to the study’s authors: Young men would rather play video games.
The four researchers conducting the study found that young men worked 12 percent less time in 2012-2015 than in 2004-2007. At the same time, they dedicated 2.3 hours more to leisure activities. Eighty-two percent of that extra leisure time went to recreational computing and video gaming.
By comparison, men aged 31 to 55 decreased their hours worked by only 8 percent over the same period, and without the commensurate uptick in video game playing.
This is where the chicken-and-egg question gets cracked: Did video games pull young men out of the workforce, or did joblessness simply leave more time for games? Columnist James Pethokoukis concludes that “America faces a massive array of daunting economic challenges but Overwatch, Final Fantasy, and Call of Duty are not among them.”
First of all, it’s a red flag that the big gaps in hours and employment between younger and older men emerged during the Great Recession and Not So Great Recovery. There are lots of potential non-video-game explanations for this. For instance, employers might have started demanding more education or experience before hiring during a time of economic tumult. …
The big jobs event in 2007 wasn’t the release of Halo 3. It was the start of a severe economic downturn.
If the recession and recovery played a big role in young men working less, then work rates should improve the further we move into the economic expansion. And that’s exactly what seems to be happening.
The employment-to-population ratio — the share of a particular population with a job — for 20- to 24-year-olds fell to 61.3 percent in 2010 from 72.7 percent in 2006, the last full non-recession year. But that number has since rebounded to 66.2 percent. Is video game quality suddenly getting worse?
Obviously, the answer to that question is no. Even the study’s authors note that since the economic recovery kicked in, total leisure time enjoyed by non-employed young men fell by five hours per week between 2012 and 2015.
So if young men are not working and not playing (and not in school and not caring for children, say the authors), what are they doing? Maybe looking for work? Or maybe they’re doing chores for their parents, since the percentage of young men living with a close relative increased by 12 points between 2000 and 2015.
That’s a nice thought, though it is not the answer, according to the study’s authors. Not considered in the analysis: time spent on Facebook or general web browsing. Also excluded: how many people are multi-tasking, playing a computer game while riding the bus, for instance.
Even if these men aren’t working, they don’t seem too upset about it. Surveys find that men aged 21 to 30 were 7 percentage points happier than men of the same age were in the early 2000s. Why? Well, if you’re not working and you live in your parents’ basement, you probably have few cares. Voilà, instant satisfaction.
Pethokoukis notes that “gamers can still be workers,” and workers are still in demand even as the labor force participation rate for young men is decreasing. That’s all the more reason to ask what is motivating younger workers to sit out of the job market. The answer is not conclusively video games.

Agree to Disagree in a Constructive Way

It seems increasingly difficult in the current political climate to “agree to disagree.” But can we disagree in a way that’s not destructive? Can we at least try not to be downright contemptuous toward those with opposing views?

That’s the question being discussed by economist Arthur Brooks, who says politicians, in particular, are creating the climate of contempt. And the damage is being foisted upon the average American.

“We have leaders who are encouraging us as citizens to treat each other with contempt,” Brooks, president of the American Enterprise Institute, said during a recent Facebook Live discussion from the Aspen Ideas Festival, an annual event held by the Aspen Institute in Colorado. “That’s a really dangerous business, building power on the basis of contempt and division. …

“The most destructive way to disagree is to treat your interlocutor with contempt. We have to get out of that particular habit. We have to demand leaders aren’t going to do that,” he said.

Sociologists describe contempt as a phenomenon in which individuals hold the conviction that other people are utterly worthless. It’s more insidious than disagreement or even anger, Brooks says.

“Anger you get over … contempt you don’t. If I treat you as a worthless human you’re never going to forget that,” he said, citing the work of marriage counselor John Gottman, who can watch a couple on a video for five seconds without the sound on and predict with 94 percent accuracy whether they will stay together or divorce based on physical expressions of contempt.

Nationally, 86 percent of Americans say they believe the country is more politically divided than in the past, according to the Pew Research Center. That’s the highest share to give that response since the question was first asked in 2004. At the same time, a CBS poll found that a majority are optimistic that Americans of different political views can come together and work out their differences.

Brooks said that Americans in general have long been able to hold political disagreements and still treat each other respectfully.

“We all love somebody who doesn’t agree with us politically,” he said.

The obsession with national politics is not what the Founding Fathers envisioned, and it is to blame for the cult-like partitioning of Americans into political tribes. Fortunately, many political leaders at the state and local level on both sides of the aisle are solving problems without the distraction of creating heroes and villains.

Brooks says it comes down to being able to “disagree better.”

“The positive change starts with us.”

Do you think that Brooks is correct? And can anything be done to bridge the divide?

Watch the video to hear more of Brooks’ views on the political climate and free enterprise as well as how he went from a classical musician to a world-renowned economist and researcher on happiness.

Underappreciated: Veterans’ Contributions to America After Military Service

Do you know a veteran? If you don’t, you are not alone. With the Greatest Generation sixteen million strong, just about every American knew a veteran following World War II. Veterans were perceived as the most honorable among us, and as a result they were revered and studied for their character traits.

That has changed, according to Gary Schmitt and Rebecca Burgess, director and program manager, respectively, of the Program on American Citizenship. The Greatest Generation is dying and the new generation of service members is a much smaller group than it used to be.

As a result, Americans don’t know a veteran anymore, not like back in the day. This unfamiliarity has led to a decline in appreciation of veterans’ contributions, and the repercussions are not good.

We now tend to view (veterans) in a bipolar way, either as heroes or victims. Around half of Americans who see a homeless man believe he’s a veteran, one study found — they’re wrong 90% of the time — yet they also rush to thank veterans for their service.

Americans, in other words, don’t understand veterans. This is partly due to the professionalization of the military. In 1973 the federal government ended conscription and established the all-volunteer force. As the population grew and the military drastically shrank, the military-civilian divide grew wider and became self-reinforcing. Today, the child of a career-military parent is six times as likely to make the military his career, while less than 1% of Americans serve. Veterans are often assumed not to be representative of America at large.

The distorted view of veterans is unfortunate, particularly because veterans’ contributions to our civic culture today are likely disproportionately high relative to society as a whole. Limited data suggest that veterans are more inclined to participate in public service and civic life — even after they leave military service — than the general population.

Once again, they are carrying the weight of our liberty on their shoulders.

Shortly after World War II, University of Chicago sociologist Samuel Stouffer launched an entire field of study dedicated to the effect of military service on attitudes and behavior in civilian life. Those studies documented the activities of returning veterans after World War II. Repeating them in the modern era would still be very helpful, not because of their impact on the health care system or the discovery of appropriate treatments for PTSD, but because veterans demonstrate qualities many of us don’t embody.

With a 21st century steeped in war, it couldn’t hurt to know more about the latest generation of veterans.

It’s likely that veterans’ participation in civic life, and especially in politics and elected office, will improve the country similarly to how the World War II generation’s involvement did. There are signs that it already is. But this is something we should know, rather than speculate about, the next time we see a homeless individual or thank vets for their service.

Americans don’t grasp just how much veterans do for America, both inside and outside the service, but an instinctive understanding of veterans’ contributions explains why public opinion holds them in higher regard than other entities that enjoy public trust (read: Congress and the media, to name a couple).

So even as veterans humbly engage in public service — after already stepping up to participate in the all-volunteer armed forces — we as Americans can try to learn from their example.

Read more about veterans’ service.

Why Wouldn’t the White House Promote Apprentices?

It seems obvious that President Trump appreciates the role of the apprentice, so it’s a wonder the question needs to be asked at all: Why wouldn’t apprenticeships be a top priority in Washington?

Well, they were this week. In the midst of several news cycles that generated lots of heat but little light, you probably didn’t realize that this week was “workforce development week.” And lest you think this is reality TV, the administration made a big push on apprenticeships during cabinet meetings and talks with state leaders.

Indeed, the president this week called for 5 million new apprenticeships over the next five years, and in a rare case for Washington, he has some bipartisan backing to pursue the goal.

In the era of four-year liberal arts degrees, apprenticeships sound anachronistic, a leftover from the past, like colonial-era horseshoeing and blacksmithing. In actuality, they’re a great opportunity for less-skilled workers, or workers with outdated skills, to get the training — and confidence — they need while getting paid to do the work.

Apprenticeships typically take the form of an employer and some type of education provider teaming up to offer hands-on training to prospective workers. Most apprenticeships are government certified. Importantly, apprenticeships are paid (unlike the typical internship), making them attractive to older workers who can’t go without an income and younger workers hoping to avoid borrowing for further education.

As labor economist Andy Smarick explains, apprenticeships are indeed paid; the question is by whom.

So what are the possible downsides of apprenticeships? One is cost; there are nontrivial expenses associated with educating someone for a job, and employers will understandably be wary of that investment if those they train end up taking their skills elsewhere. Another question concerns education providers: Who should deliver training — high schools, community colleges, unions, nonprofits, for-profits, employers? And assuming the government provides funding, how should providers be held accountable?

Of course, in Washington, success always comes down to money, and this case is no exception. The federal budget currently allocates $90 million a year to cover the cost of “regulating” apprenticeship programs. CNBC reported this week that the president doesn’t want to raise that budget by more than $5 million per year, leading some editorial writers to ponder whether the federal kitty has enough cash to keep the program purring.

Really, the challenge for Washington isn’t necessarily whether there’s enough money to grow the program, but whether it’s being used well. Politico reported that one senior administration official said, “The problem is not money … the problem is (training programs) haven’t been set up in an effective and accountable way.”

So what are the expenses being added up in Washington? They include partnerships between employers and higher education, which would mean dealing with accreditation and student aid. Or, the money could be spent on reorganizing existing federal workforce programs, which have been overlapping and wasteful.

Aside from the financial question, Smarick notes that one of the major concerns around apprenticeships is that young students coming out of high school and trained for a specific job eventually fall behind later in life because their skills “become outdated, the industry weakens, or the jobs get replaced.”

On the other hand, the U.S. Department of Labor found that “nine in 10 Americans who complete apprentice training land a job, and their average starting salary is $60,000 a year.” That’s certainly a step forward from the current situation, in which young people with high-school educations alone are roundly unprepared to enter the workforce, and generally end up in less-skilled, lower-wage jobs with less security.

Here’s a thought: No one is suggesting that high school educations be replaced with vo-tech. But if the U.S. were to adopt the European model of tracking kids according to their aptitudes, then perhaps industries as a whole could provide the ongoing job training that lawmakers so frequently laud but rarely enable. Appropriately structuring federal budget expenditures would not only provide enough money to fund apprenticeship accreditation, but could also put programming on a path toward more accurately targeting workers for mid-life skills retraining.

Seems like that’s a program that would result in more workers hearing, “You’re hired!”

Read more from Smarick on President Trump and the basics of apprenticeships.

Deaths of Despair: Opioid Abuse Devastates America. There Is a Solution

Low-income, poorly educated whites between the ages of 45 and 54 are dying too soon. Unlike every other age, racial and ethnic, education, and income group, this group’s longevity is decreasing. Why? Opioid abuse.

That’s right: prescription painkillers, heroin, fentanyl, and other opiate derivatives killed more than 33,000 Americans in 2015 — about four times as many opioid-related overdose deaths as in 1999.

Nearly half of those overdoses come from prescription painkillers. But the number of prescriptions written for opioids has been on the decline since 2011. That may explain the rise of heroin and fentanyl as substitutes for legal opioids. Heroin and fentanyl now account for more deaths each year than painkillers do.

These drugs are extremely potent. Fentanyl, which was created to relieve pain in end-of-life cases, is about 50 times as potent as heroin; people survive it only because they build up a tolerance.

The related costs associated with this national epidemic total about $77 billion.

That seems shockingly high, but consider some of the tentacles of the epidemic. The foster care system is overwhelmed. West Virginia, which has the highest overdose death rate in America, has run out of funding for funeral burial benefits. Ohio has started building portable morgues because coroners’ offices are full. And the state of Arizona recently concluded that in 2016, more than two Arizonans died every day, on average, from opioid-related overdoses.

To put it bluntly, the United States has a killer problem on its hands.

Christopher Caldwell, a senior editor at The Weekly Standard and author of an essay entitled, “American Carnage: The New Landscape of Opioid Addiction,” recently spoke at a conference about the massive growth in opioid-related deaths. It’s a problem that began long ago.

The specific problem of opioids, I think, has to do with the confluence of three things in the 19th century: At the start of the 19th century, scientists were able to isolate morphine, the chemical in opium poppies. In the 1850s, we invented the hypodermic needle. And in the 1860s, we fought the bloodiest war in the history of the planet, and a lot of people came home with what we now call chronic pain, and the uses of this drug were just infinite.

It was over-prescribed. You know what happened, or you can predict what happened. A lot of mothers and teachers, pillars of the community, got addicted and died.

It wasn’t until shortly before World War I that the first drug laws were passed. Drugs became taboo, but after Vietnam, drug use started rising again, and with that, so did drug deaths. The use of crack in the 1980s began elevating the death rate. But the spike in recent years is a whole different animal.

So can something be done? Well, resources seem to be moving in the right direction, and in one of those rare good news stories, federal money is being directed toward actual solutions.

For instance, drug courts have expanded access to medication-assisted treatment (like methadone), and residential treatment programs, as opposed to jail, are helping addicts recover, not languish in prisons.

Harold Pollack, a professor in the School of Social Service Administration at the University of Chicago and a contributing researcher to the National Drug Abuse Treatment Systems Survey, which tracks drug addiction and substance abuse treatment programs nationwide, says there is also some movement among lawmakers “who are looking at a map of the nation, and seeing the problem is everywhere.”

“Antiquated behavioral health systems” are being given new life with federal funding. Ironically, these solutions are being funded in part by one of the most controversial assistance programs out there – Medicaid.

“Medicaid is kind of the ball game on the service side. It’s so much more important than the (21st Century) Cures Act or anything else that people are going to talk about,” Pollack said.

Pollack said that as lawmakers figure out how to replace the Affordable Care Act, one of the issues that isn’t on the chopping block is mental health parity in health care, which includes addressing the symptoms that lead to drug addiction.

What’s striking is ACA-Medicaid expansion is kind of the quiet model for successful bipartisan health policy. Nobody really wants to talk about it, but that’s what is happening on the ground. When you call up someone in a random state … the conversation is about the work, it’s not about the politics.

And in fact, when we ask people, you know, there’s just been an election, does that change anything, the most common answer we hear is, ‘We’ve been told by our governor: just do the work, don’t pay attention to what’s happening in Washington, just keep doing.’ And I actually find that very encouraging. Democrats and Republicans around the country are governing, and they’re really trying hard to deal with this because they see this map, and they don’t want people to die.

Pollack notes that Medicaid expansion has been good and bad when it comes to addressing the drug crisis, and policymakers “know less than we should about what’s happening out there.” Fortunately, he said, the problem is finally being taken seriously, though it’s unfortunate what conditions had to arise before it was.

Compared with the crack epidemic and the HIV work, when the drug problem was much more black and brown in its public conception than it is now, that’s a welcome change. I must say I feel a certain sense of sadness at seeing the difference in public reaction, but it’s a good thing that people are responding with empathy and compassion.

Watch more about the opioid epidemic.

Farm Subsidies: Not Your Father’s Cropshares

Imagine this scenario: A massive disaster hits and America’s food sources are wiped out. Miles of crops no longer exist. The cost of food skyrockets. America’s farmers are devastated. The land is destroyed, the farm equipment turned to trash, homes and livelihoods are ruined.

The government mobilizes into action. How? By paying the farmers for their losses.

Could it happen? Well, weather disasters do occur, and drought and crop loss do impact farms. And lo and behold, the government does pay farmers for their losses. But is this assistance really necessary?

By one estimate, it would cost taxpayers about $6 billion a year to cover the losses from these disasters. Moreover, this level of loss doesn’t have any real impact on America’s food supply. A bigger impact on farm production and prices comes from government manipulation of the market.

So why are farmers receiving $23 billion in federal farm subsidies — government payouts — every year, more than a third of which is going to pay for crop insurance?

Farmers are vital to America and the world. America alone provides about 30 percent of the global corn supply each year and 8 percent of its soybeans.

But the image of the struggling American family farm is now more myth than fact. Data from 2014 show that the median wealth for farm operator households is $802,000, roughly 10 times the median for U.S. households overall ($81,200). Farm families are six times wealthier than the average American family.

Meanwhile, just 2 percent of farm households live below the poverty line. Compare that to the national rate: 43 million Americans, around 13.5 percent, live in poverty (if you don’t count government assistance!).

That’s not to say there aren’t a whole lot of family businesses operating farms. It’s merely to say that the vast majority of the nation’s crops come from large farm operations.

So why is the taxpayer subsidizing farmers? For some perspective: Farmers in the top 1 percent income bracket on average collect $1.5 million in annual farm subsidy welfare checks. Seventy-nine percent of all farm subsidies are paid to the top 10 percent of the largest farm operations.

American taxpayers give farmers $8.5 billion a year to pay for insurance against crop losses. If you think Obamacare is a wealth redistribution program, consider this: The taxpayer effectively pays $2 to transfer 90 cents to the farmer and $1.10 to the insurance industry.

From Vincent Smith, an economics professor in the Department of Agricultural Economics and Economics at Montana State University:

The largest farm subsidy boondoggle through which the farm sector milks the federal taxpayer is the federal crop insurance program. Currently, under this program, taxpayers fund over 60 percent of all indemnities received by farmers. For every dollar the average farmer pays out in premiums, he or she gets back more than $2 in indemnity payments without making any contribution to the program’s administrative costs.

For farmers, crop insurance is an upside down Las Vegas gamble where the odds of winning are massively stacked in favor of the gambler, not the casino. The “casinos” in this case are the agricultural insurance companies, and they are not really losing any money because almost all crop insurance program losses are underwritten by taxpayers. …

There are no caps on subsidies in the federal crop insurance program; the bigger and richer the farm, the more lucrative the crop insurance program. Because the risks of crop revenue losses from poor crops or low crop prices are covered, farmers adopt more risky production and financial strategies. They win if the risky decisions pay off; the taxpayer foots the bill if they don’t. Farmers also have incentives to plant crops on lands that have poor soils, are environmentally fragile and that would never otherwise be used for crop production.

Smith notes that the current administration has proposed modest reforms to the 2018 farm bill, which comes up for renewal every five years. It calls for a 20 percent drop in the $23 billion spent every year on farm subsidies. Of that, about $2.8 billion would come from reducing insurance subsidies.

The Trump administration wants a cap of $40,000 per individual farm for government subsidies used to buy crop insurance premiums. These cuts would only affect farms with market sales for crops in excess of $750,000.

Currently the government pays an average of 62% of all premiums for crop insurance coverage, with no limits on how big an individual farm subsidy can be. In 2011, according to the nonpartisan Government Accountability Office, more than 20 farms received over $1 million in such subsidies, and most crop insurance subsidies flowed to very large corporate farms. The White House’s Office of Management and Budget estimates that the $40,000 cap would reduce annual government spending by $1.7 billion, or about 15% of current crop-insurance subsidies.

According to Smith, the reduction in subsidies to farming operations would amount to about 1 percent of farm revenue. With $400 billion in annual revenues on top of $23 billion in government largesse, that sum barely impacts the market.

Agriculture is only 1 percent of the overall U.S. economy, and it does involve risk, but does that risk justify reaching into the taxpayers’ pocketbook? And if there’s one place to start cutting, shouldn’t it be payouts to a wealthy class of business owners?

How Work Requirement in Food Stamp Program Helped Reduce Poverty in Maine

TPOH has long advocated maintaining a safety net for those truly in need while also supporting work as a means to build value in one’s own life and in the lives of others. Work provides meaning and purpose, despite those who wish to argue otherwise.

So it’s refreshing to read a strong rebuttal to a shocking claim that suggests proposed changes to the food stamp program will force people to hunt squirrels for food. Turns out such hyperbole doesn’t stand up to the evidence.

The Washington Post reported last week that a Navy veteran was forced to catch, skin, and eat squirrels cooked on a flame near the tent where he lived in Augusta, Maine, after the state tightened its work requirements for recipients of the social safety net. The newspaper then suggested that President Trump’s federal budget proposal mimics the Maine plan and could jeopardize poor people.

But political commentator Marc Thiessen, a former speechwriter for President George W. Bush, cleared up the Post’s misconceptions.

First of all, under federal law, work requirements apply only to able-bodied adults without dependents (ABAWDs). So if a person is truly disabled, he or she would not be subject to work requirements.

Second, the work requirements are not all that stringent. Able-bodied adults can receive three months of food stamp (Supplemental Nutrition Assistance Program, or SNAP) benefits in a 36-month period, after which they have three options for fulfilling their work requirement:

1. Work a paying job for at least 20 hours per week.

2. Participate in a federal or state vocational training program for at least 20 hours per week.

3. Perform 6 hours of community service per week.

This means that in order to be forced to hunt squirrels for food, you’d have to refuse not only to work, but also to participate in work training, or to volunteer for the equivalent of just one hour per day. If you are able enough to hunt and skin squirrels, you’re probably able enough to meet those minimal requirements.

Thiessen then explained that the state helps those who are bound to the work requirement with resumé building, job interview training, support coaching, and even providing volunteer opportunities.

As a result, Maine’s food stamp rolls plummeted by 86 percent, while its able-bodied adults experienced an average 114 percent increase in income!

Forbes magazine reported that people who relied on the program saw their average benefits drop 13 percent because they ended up needing less assistance. The work requirement reduced the cost of the food stamp program by $30 million to $40 million annually.

As Thiessen explains:

In other words, work requirements in Maine have been a huge success. Far from hunting varmints, most people have found work. And – here’s the important part – work is what most people on food stamps really want. …

Thiessen explained that something similar occurred in New York City under Mayor Michael Bloomberg: people in the program reportedly said that while the EBT card is nice, they preferred a job. Implementing the work requirement took New York City from one of the nation’s highest poverty rates to one of the lowest.

To claim that work requirements are somehow cruel is to deny individuals the opportunity to achieve something self-made, an outcome that satisfies an internal need for fulfillment, not just a need for a full belly.

Some oppose work requirements because they see them as a way to punish welfare recipients or deny them benefits. But work is not a punishment. Work is a blessing. And work requirements are a critical tool to help rescue our fellow Americans from the misery of idleness – so they can achieve meaning and happiness in their lives through the power of honest, productive work.

Which Pays Better Wages: Government or the Private Sector?

The Congressional Budget Office, the federal government’s number cruncher, recently completed an analysis comparing the salaries and benefits of employees of federal and large private-sector employers, and concluded that, all things being equal, the federal government pays better wages than the private sector.

On average, the federal government’s compensation package pays a 17 percent premium over the private sector.

The analysis, called “highly professional” and “state-of-the art” by former Social Security Administration Deputy Commissioner Andrew Biggs, is an attempt to do an apples-to-apples comparison by taking into account levels of education and experience.

All-in compensation per full-time equivalent federal employee in 2015 was about $123,000. Assuming a 17 percent federal pay premium, this implies that on average a similar private-sector employee would receive total pay and benefits of about $105,000, an annual difference of about $18,000. …

When averaged over 2.1 million federal employees, the federal compensation premium adds up to real money. Total federal compensation last year was close to $260 billion. A 17 percent difference is about $38 billion per year, equal to what the federal government spends on energy and the environment and substantially exceeding federal spending on transportation.

The CBO report found that 91 percent of federal employees have an education ranging from high school graduate to master’s degree, and that these employees make more than those of equivalent educations at similar jobs in the private sector. The report found, however, that the 9 percent of the federal workforce that have doctoral level degrees make 18 percent less than those with equivalent degrees in the private sector.

Biggs says that differences in grades, alma maters, and fields of study have not been measured, so there’s no way to know whether federal workers are more “middle of the road” students from average colleges compared to Ivy Leaguers with top grades. He suggests that this lack of information may be where the weakness in the report lies, and it could be a notable variable since “most private-sector employers could not attract and retain employees while paying 18 percent less than their competitors.”

Doubling back, however, Biggs then says that the federal pay premium could be hurting innovation, because workers who choose higher-paying government jobs over private-sector work are squandering their potential creative energies.

As the CBO report shows, for less-educated workers federal pay is more than 50 percent higher than private-sector levels. This makes it almost impossible for an employer of less-educated workers to compete and, as a result, the best of that group — employees with the greatest drive, imagination, and leadership — may find themselves employed in government rather than the private sector, where they might make a larger impact on their communities. …

There are many highly-educated, highly skilled, highly-motivated Americans working for the federal government doing important jobs. But we shouldn’t miss the risk that generous federal pay could mean the founders of the next Google or Tesla find themselves working in a federal office building instead of creating the innovations that can change the world.

But perhaps the well-paid average government worker of a decent education isn’t missing his calling. A recently released study that tracked 81 high school valedictorians through their careers found that the best and the brightest often end up in great jobs but ones that lack creativity. The suggestion is that the early track toward professional success pushes these highly motivated students to avoid risk-taking. They do not pursue eminence in one particular field nor devote themselves to a single passion.

“They obey rules, work hard, and like learning, but they’re not the mold breakers. … They work best within the system and aren’t likely to change it.”

In other words, dropouts like Bill Gates, Steve Jobs, and Mark Zuckerberg are unlikely to be interested in government careers in the first place.

Ultimately, the federal government’s high pay does have side effects. It skews the pay scale and impacts the labor market, making it harder for companies to compete for bright employees. However, if the goal is to populate the federal government with good-quality workers, financial benefits are a solid offer to attract them.

Rebuilding America: An Investment in Social Capital

With the advent of modern transportation, community certainly extends beyond the boundaries of one’s home, so it shouldn’t be a great surprise that the percentage of adults who say they spend a social evening with a neighbor at least several times a week fell to 19 percent in 2016 from 30 percent in 1974.

This country may no longer be built on loving thy neighbor, but perhaps neighborliness is simply a lost art in need of a renaissance.

That’s the gist of a new report just released by the Joint Economic Committee on Capitol Hill. “What We Do Together: The State of Associational Life in America,” is part of the Social Capital Project, run by Sen. Mike Lee of Utah.

Its stated purpose?

The Social Capital Project is a multi-year research effort that will investigate the evolving nature, quality, and importance of our associational life. ‘Associational life’ is our shorthand for the web of social relationships through which we pursue joint endeavors—namely, our families, our communities, our workplaces, and our religious congregations. These institutions are critical to forming our character and capacities, providing us with meaning and purpose, and for addressing the many challenges we face.

The goal of the project is to better understand why the health of our associational life feels so compromised, what consequences have followed from changes in the middle social layers of our society, why some communities have more robust civil society than others, and what can be done — or can stop being done — to improve the health of our social capital. Through a series of reports and hearings, it will study the state of the relationships that weave together the social fabric enabling our country — our laws, our institutions, our markets, and our democracy — to function so well in the first place.

The first report from the project is a bit dispiriting. While Americans are materially much better off, the social fabric is frayed, fractured, and seemingly coming apart. At risk are the very social norms that sustain a middle class and a “free, prosperous, democratic, and pluralistic country.”

Some of the findings in the report reveal that social capital is dropping because Americans are spending less time socializing with neighbors, declining to vote, and losing trust in fellow Americans (down from 46 percent in 1972 to 31 percent in 2016, according to the General Social Survey).

Political columnist Ramesh Ponnuru points out some exceptions raised in the report.

Rates of volunteering have increased. Some kinds of political engagement have also risen: The percentage of the population that reports having tried to influence someone else’s vote has gone up over the last few decades. The overall story, though, is one of fewer and weaker interpersonal connections among Americans.

Social scientist Charles Murray, who testified before the Joint Economic Committee this week, described the impact of a decline in social capital: fewer people are getting married and fewer men are working. He said that the government can try to find policies to encourage behavioral changes, but the declines are symptoms of a larger, more visceral problem.

If I had to pick one theme … it is the many ways in which people (behave) impulsively — throwing away real opportunities — and unrealistically — possessing great ambitions but oblivious to the steps required to get from point A to point B to point C to point D in life.

In other words, the desire for instant gratification has its consequences. Another problem he cited is a general self-destructiveness: the squandering of ample opportunities to get ahead.

The solution?

It comes down to the age-old problem of getting people, especially young people, not to do things that are attractive in the short term but disastrous in the long term and, conversely, to do things that aren’t fun right now but that will open up rewards later in life. The problem is not confined to any socioeconomic class. The mental disorder known as adolescence afflicts rich and poor alike. And adolescence can extend a long time after people have left their teens. The most common way that the fortunate among us manage to get our priorities straight — or at least not irretrievably screw them up — is by being cocooned in the institutions that are the primary resources for generating social capital: a family consisting of married parents and active membership in a faith tradition.

I didn’t choose my phrasing lightly. I am not implying that single parents are incapable of filling this function — millions of them are striving heroically to do so — nor that children cannot grow up successfully if they don’t go to church. With regard to families, I am making an empirical statement: As a matter of statistical tendencies, biological children of married parents do much better on a wide variety of important life outcomes than children growing up in any other family structure, even after controlling for income, parental education, and ethnicity. With regard to religion, I am making an assertion about a resource that can lead people, adolescents and adults alike, to do the right thing even when the enticements to do the wrong thing are strong: a belief that God commands them to do the right thing. I am also invoking religion as a community of faith … For its active members, a church is far more than a place that they go to worship once a week. It is a form of community that socializes the children growing up in it in all sorts of informal ways, just as a family socializes children.

Murray said his ideas are not meant to generate policy recommendations, but more a warning.

I would argue that it is not a matter of ideology but empiricism to conclude that unless the traditional family and traditional communities of faith make a comeback, the declines in social capital that are already causing so much deterioration in our civic culture will continue and the problems will worsen. The solutions are unlikely to be political but cultural. We need a cultural Great Awakening akin to past religious Great Awakenings.

Will the Social Capital Project be able to trigger a “Great Awakening”? Perhaps not. But a disconnect in society will almost certainly breed bigger problems and, ultimately, a larger breakdown that only homegrown gumption can fix.

As Ponnuru explains, a return to the aspirational nature of social capital may require a “rediscovery of Tocqueville.”

Sentiments and ideas renew themselves, the heart is enlarged, and the human mind is developed only by the reciprocal action of men upon one another. … In order that men remain civilized or become so, the art of associating must be developed and perfected among them.”

Everybody Lies: Except in a Google Search

Don’t bother answering the next pollster who calls to do a survey. You’re probably going to lie to him. Because “everybody lies.” And there’s no point in taking a survey if you’re going to lie. Besides, Google’s already got you on the truth meter.

That’s one of the main discussion points in the new book, “Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.”

The blurb on the book says, “By the end of an average day in the early 21st century, human beings searching the Internet will amass 8 trillion gigabytes (GB) of data.” Every day, 8 trillion GB. What does that even amount to? Who knows, but it’s a lot. The average computer has about 4 GB of memory. A flash memory card in a camera may store 16 GB. We’re talking 8 trillion GB – daily.
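Using the article’s own reference points, that scale can be put in rough perspective. A back-of-the-envelope sketch (the 4 GB and 16 GB figures are the comparisons given above):

```python
daily_data_gb = 8e12   # 8 trillion GB amassed per day, per the book's blurb
pc_memory_gb = 4       # memory of an average computer (figure cited above)
flash_card_gb = 16     # a typical camera flash card (figure cited above)

# One day of search data equals the memory of two trillion average computers,
# or half a trillion flash cards.
print(daily_data_gb / pc_memory_gb)    # 2e12 computers' worth of memory
print(daily_data_gb / flash_card_gb)   # 5e11 flash cards, every day
```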

So what are people searching? Pretty much everything, according to “Everybody Lies” author Seth Stephens-Davidowitz.  And the data these searches reveal can be one useful tool for putting the human psyche under the microscope.

“People are honest on Google. They tell Google what they might not tell to anybody else. They’ll confess things to Google that they wouldn’t tell friends, family members, surveyors, or even themselves,” Stephens-Davidowitz said Tuesday in remarks about his book.

Take, for instance, some of the common confessional-style searches that Google gets: “I hate my boss,” “I’m happy,” “I’m sad,” or even “I’m drunk.”

Some of the searches can become rather morose and depressing. For instance, after the San Bernardino attack in 2015, in which 14 people were killed and another 22 seriously injured, top Google searches that soon followed included “Muslim terrorists” and “kill Muslims.” Stephens-Davidowitz says the searches certainly lack the context needed to know exactly what people were trying to express, but they still provide guidance.

Here’s one way the data were used. Shortly after the attack, President Obama delivered a speech to try to calm fears about Muslims in America. But his grandiose sermonizing about opening America’s hearts backfired. Even during the speech, people got angrier. At one point, though, Obama said that we have to remember that Muslim-Americans are our friends and neighbors, our sports heroes, and members of the military willing to die to defend this country.

Immediately, while the speech was still being given, Google searches for “Muslim athletes” spiked. The increase was so notable that when Obama gave a speech a couple weeks later on the same topic, he skipped the lecturing and focused on the contributions of Muslim-Americans.

Stephens-Davidowitz argues that while Obama’s sermon didn’t tell anybody anything that they didn’t know, the line about sports heroes provoked curiosity, provided potentially new information, and redirected attention. This may not indicate that there’s a science to calming fears after a terror attack, but it does show the power of the data to change how people act and react.

Stephens-Davidowitz says part of the reason data searches are more useful than old-fashioned survey questions is that people tend to lie in surveys to make themselves look good. It’s called social desirability bias. It happened during the 2008 election.

During that time, most Americans surveyed said Obama’s being black didn’t matter. Yet during the election, there was a spike in searches for racist terms. Graphing that data revealed that those searches were geographically divided between East and West. While correlation is not causation, in the areas where racist searches spiked, Obama lost about 4 percentage points of the vote in Democratic strongholds relative to the previous Democratic candidate (John Kerry). He also generated a 1-2 percentage point increase in the number of African-Americans who voted.

Map of Google searches of racist content

The book, “Everybody Lies,” isn’t entirely about politics. It talks about a variety of topics like the stock market, crime, sports, and, of course, sex, a hugely commercial enterprise on the Internet. In one example about the truth of big data, Stephens-Davidowitz notes that American women said in recent polling that they had sex (heterosexual or homosexual) about once a week and used condoms about 20 percent of the time. Extrapolating those numbers would mean about 1.6 billion condoms were used that year. But asking men the same question yielded just 1.1 billion condoms allegedly used that year.

So who’s telling the truth, men or women? Neither. According to sales reports, just 600 million condoms were sold during the year in question.

Stephens-Davidowitz conjectures that people have a stronger incentive to tell the truth to Google than to a pollster conducting a survey, because they need information. For instance, an increase in search volume for polling places in an area in the weeks before an election is more likely to reveal whether turnout will be high there than a pollster finding that 80 percent of people say they will vote.

But is Internet search a digital truth serum? Is it the best way to get real answers? Yes and no.

It depends on how available other high-quality data are. For instance, Google Flu Trends, which attempted to determine how sick the population was during flu season based on searches about symptoms, was not as accurate as the flu modeling used by government agencies like the Centers for Disease Control and Prevention.

Furthermore, what people search doesn’t explain why they search. Likewise, Google doesn’t identify who’s searching, so we don’t know whether searches are a representative sample of the population, and search volumes are relative, leaving no way to know absolute levels of response. For that, we need lots of different types of data.

But Internet searches may be more useful for measuring the human psyche than for predicting the future. Big data can be helpful for questions that do not require very precise numbers. Predicting an election within 5 percentage points isn’t helpful, but being off by 10 percent when counting the number of condoms used in a year probably is not a big deal.

As for topics like child abuse, Stephens-Davidowitz says that he’s not actually sure how to use the data to help governments and protective agencies develop programs to identify and address abuse, but that it’s certainly information that would be helpful in filling a gap in reporting. And like any pollster worth his salt will tell you, being able to ask the right question is one vital way of getting to an accurate answer.

Watch the remarks by Stephens-Davidowitz.

Sports Industry: The Economic Spillover of LeBron James

America loves its sports teams. There’s nothing like a cross-division rivalry to get people worked up and trash talking. Teams bring a great deal of pride to cities, and that’s why the years never blunt the hurt of a team’s move, whether it be the 1984 bolt by the Colts from Baltimore to Indianapolis or the 2020 planned move of the Raiders from Oakland to Las Vegas.

Cross-country movements of teams remain psychologically, if not economically, important to cities.

But individual athletes also have their impacts on a town. Some of it is cultural or behavioral. Star athletes can be propped up as hometown heroes, or if they misbehave, they can be shamed out of a city.

Take LeBron James, for instance. Practically a household name, James won the NBA’s MVP award four times. He won three NBA championships, and was part of two victorious US teams at the Olympics. He is a showman and a hard charger, and is welcome wherever he wants to play.

James’ move from his hometown Cleveland Cavaliers to the Miami Heat in 2010 felt like betrayal to the locals who watched him rise from a Northeastern Ohio upstart to an international superstar. Likewise, his move back to Cleveland in 2014 was treated like the prodigal son had returned home.

Now, a recent economic study concludes that James’ influence goes beyond pride. His popularity makes him a draw, and his presence has a significant economic impact on the communities where he plays.

We find that Mr. James has a statistically and economically significant positive effect on both the number of restaurants and other eating and drinking establishments near the stadium where he is based, and on aggregate employment at those establishments. Specifically, his presence increases the number of such establishments within one mile of the stadium by about 13 percent, and employment by about 23.5 percent. These effects are very local, in that they decay rapidly as one moves farther from the stadium.

Mapping out concentric circles to measure James’ impact around the sports facilities where he played, the study’s authors measured growth using employment and establishment data from Harvard’s Center for Geographic Analysis. They crunched the numbers to calculate the increase in food and beverage establishments and the number of employees in these industries within 10 miles of the Cleveland and Miami basketball stadiums.

The economists then ran a couple of regression analyses and found that James’ presence increased the number of restaurants up to about seven miles out.

The data show a downward trend in the number of restaurants in Cleveland between 2010 and 2014 that coincides with an upward trend in Miami. After Mr. James returned to the Cavaliers, the number of restaurants near the Quicken Loans Arena in Cleveland spiked, while the number of restaurants within a mile of the American Airlines Arena started to slide.

They also found a positive correlation between the number of regular-season wins by the Cavaliers and the Heat and the number of restaurants located within one mile of the corresponding stadium.

But when they separated out the cities using a different formula, they found that James’ impact was greater in Cleveland than in Miami. So what can they conclude?

Two potential explanations come to mind. Perhaps Mr. James is particularly beloved in his native Ohio. Or maybe ‘superstar amenities’ are substitutes, not complements, and Miami has plenty of them even without Mr. James, generating fiercer competition and an attenuated impact of any specific superstar.

In other words, a town that has more to offer to its residents and visitors, an advantage Miami has over Cleveland, may feel less impact from the arrival or departure of a superstar athlete.

Whether or not you can draw a conclusion from this standalone study, it’s fun to consider. And more importantly, it suggests that it wouldn’t hurt to take care of our neighbors who make good. Their success reverberates like a stone skipping across the water.

Upward Mobility: New Routes in the Race For America’s Fastest Growing Cities

Wake up, America. We have a mobility problem. And we’re not talking about former First Lady Michelle Obama’s “Let’s Move!” campaign or the number of potholes on the highways to America’s fastest growing cities.

Yes, infrastructure maintenance and improving the physical health of children are important. But even the kids who sit in front of electronics for the better part of the day are poised for a better life than their parents, as long as they live in the American Northeast, much of the Midwest, or a good portion of the West.

For children who live in Appalachia and the “Rust Belt,” on the other hand, the cards are stacked against them, even if they never lay eyes on a digital screen, always eat healthy school lunches, and stay physically active. That’s because their future employment opportunities are dwindling, as is their ability or likelihood to pick up and move down the road to another city with a more promising employment future.

For these kids, “Let’s move” isn’t that simple. Why? Because America’s post-recession recovery is more one-sided than we would like.

A recent Economic Innovation Group (EIG) report shows that while the United States has been recovering economically as a whole, the individual areas where new businesses have popped up – and employed people – are very limited.

According to the report, 20 counties alone generated half of the country’s new business establishments.

Most children in the United States are growing up today in counties with a poor record of fostering upward mobility. As the geography of U.S. economic growth narrows, it may become even harder to prevent further retreat of economic mobility.”

How do we change this? How do we spread out new business ventures and incentivize entrepreneurs to start and grow successful companies in areas that are currently economically depressed? How do we tighten the gap that only seems to be growing as economically vibrant cities get stronger and blighted cities become more depressed?

A recent piece by AEI’s James Pethokoukis on “left behind” America cites some creative ideas, including relocating large federal departments that don’t need to be inside Washington, D.C. (which usually enjoys low unemployment) into cities that do need help building economic infrastructure.

Pethokoukis also explores “Universal Basic Service,” an idea that would focus on helping to build communities in areas where demand is high, but supply is low. He cites economist Diane Coyle, who says,

If teachers or nurses do not want to move to Detroit and West Virginia … then there should be a pay premium large enough to overcome their reluctance. And the quality of service in local transport networks should be as good in declining as in wealthy areas.”

A third proposal uses tax breaks as incentives to encourage private investment – a route most strongly favored by the EIG itself. Yeah, that sounds boring, but implementing it would mean a whole rethinking of how America is structured today, removing regulatory hurdles to create specialized regions like Silicon Valley for technology and Raleigh-Durham for biotech research. Quoting venture capitalist Marc Andreessen,

Imagine a Bitcoin Valley, for instance, where some country fully legalizes cryptocurrencies for all financial functions. Or a Drone Valley, where a particular region removes all legal barriers to flying unmanned aerial vehicles locally. A Driverless Car Valley in a city that allows experimentation with different autonomous car designs, redesigned roadways and safety laws. A Stem Cell Valley. And so on.”

These three very big ideas would likely require quite a bit of political maneuvering just to draft the legislation for such restructuring, let alone to pass laws and begin implementation. But there are other, smaller ways to help people in distressed areas find employment and propel themselves toward upward mobility.

Pethokoukis colleague Michael Strain suggested multiple proposals to address relocation, disability, minimum wage, immigration, entrepreneurial endeavors, and more.

Ultimately, if we want America – all of America – to enjoy the benefits of our economic recovery, we need to make changes that make it possible for all citizens to earn their success with hard work.

“Let’s move” can have a whole new, broader meaning when we consider how we can offer a hand up to those who want to climb and make a better life for themselves and their families.

Getting Men Back to Work: A Little Public Shaming Doesn’t Hurt

Over the past year, three notable policy experts from across the political spectrum have documented the crisis of prime-working-age men dropping out of the labor force. And they have come up with some interesting ideas for getting men back to work.

In his 2016 book “Men Without Work,” demographer Nicholas Eberstadt found that more than 7 million men between the ages of 25 and 54 were not working or looking for work. President Obama’s Council of Economic Advisers corroborated Eberstadt’s findings on the extent of the problem, concluding that the labor force participation rate for prime-aged men fell from 96 percent to 88 percent from 1967 to 2016. For those with only a high school diploma, the labor force participation rate now stands at 83 percent.

And it’s not because these men have something better to do.

These men are generally not engaged in other productive activities like education or child-rearing. Instead, they mostly spend time in leisure activities, though not happily. Opioid use and dependency among them is rampant, and their mortality rates — what Anne Case and Angus Deaton of Princeton University call ‘deaths of despair’ — are rising.

In other words, this trend is a disaster. Getting men back to work is not only critical for them, but for their communities and the U.S. economy.

In a new paper released last week, former New York City Human Resources Administration Commissioner Robert Doar, former Clinton Labor Department chief economist Harry Holzer, and family and economic stability expert Brent Orrell outlined a range of policy responses that leaders of both political parties in Washington would be smart to consider.

Interestingly, in their long road map for reforms, the authors cite a little public shaming, as well as a lively discussion of the dignity of hard work (as expertly expressed by Mike Rowe) as potential motivators for getting men off the sidelines and back into the workforce.

In other words, sitting around doing nothing deserves a little scolding, as they recently wrote in The Hill newspaper:

We endorse a strong public commitment to work by our political, social and community leaders. A little stigma about non-work when work is available is a good thing. Every adult should be engaged in a productive activity in the job market, at school or at home.

They add in their research paper,

Of course, there is no obvious policy lever that can change social norms regarding work. However, that does not mean political leaders are powerless to affect change. Much has been made of our present ‘populist’ political moment, in which leaders have promised to stand up for the men and women forgotten by the distant political elites and victimized by trade, immigration, and elite corruption. These messages should come with an addendum: Just as elected officials have an obligation to defend the interests of Americans, able-bodied Americans must take some responsibility for improving their economic situations by working and pursuing the opportunities that are available.

Aside from calling men out, the authors acknowledge that they disagree among themselves about some of their own plan’s proposed reforms, but they would rather pick all than none. In other words, the political fights in Washington that devolve into each team rooting for its own side are damaging the playing field as much as the players. And for that, elected officials could use a little scolding of their own.

The situation is dire enough that everything on the table needs to be implemented, particularly since the psychological toll of non-work on men has such far-reaching consequences, including a decline in marriage and intact families, increased drug use, and recidivism among ex-offenders.

Their proposals include work-focused reforms to government benefit programs such as food stamps and Medicaid, increased tax credits, and more generous wage subsidies. Other ideas include training programs in community colleges and apprenticeships in sectors like health care, advanced manufacturing, and information technology, where labor demand is high.

For others who are hard to employ or residing in depressed communities where few jobs exist, we need to create more jobs through subsidies to employers. Recent evidence shows that we can do so quickly and in large numbers, thereby raising employment among the disadvantaged and creating indirect positive effects, like lower crime.

With the baby boomer generation heading into retirement, the need for more men to work is only going to grow, especially in male-dominated sectors like construction, which could see even higher labor demand if President Trump and Congress follow through on an infrastructure package.

So shame on the men who just sit around rather than diligently toiling away. There’s plenty of work to be had if men pick themselves up and go out and get it.

Numbers Don’t Lie: How Paid Parental Leave Helps the Economy

“Pawternity leave” is on the rise around the world. Several companies in the UK and India now provide “paid parental leave” for pets! That’s paid time off for employees when a new pet becomes part of the family.

“Pets are like babies nowadays,” according to the owner of a UK tech company. “So why shouldn’t staff have some time off when they arrive?”

Which raises the question: If puppy parents are reaping the benefits of paid parental leave, can’t the United States provide comparable benefits to ensure human babies receive the same support?

Perhaps it’s a bit more complicated than an optional company program to let people stay home to house-train the dog. And it’s not that Americans don’t care about new parents or babies. (A Pew Research survey from last month reveals Americans overwhelmingly support paid leave access following the birth or adoption of a new child.) Many Americans just disagree about who should foot the bill – private employers or taxpayers.

Well, how about both? Many in the private sector already enjoy weeks or months of paid leave as part of their company’s benefit package, and they have no need for government assistance. Those company policies can stay right where they are.

But many companies don’t offer assistance to new parents, especially when it’s too expensive to sustain an employee who is not actively contributing to the company’s bottom line for weeks at a time.

A new paper studying the paid leave system shows that parent workers who are least likely to be offered paid parental leave as a company benefit are the ones who need it most — they are at the lower-end of the economic ladder, and the ones whose families are least able to sustain themselves during a pause in earnings. The end result is that lower-income parents who choose (or need) to stay home with a new child often are forced to quit their jobs or are given no guarantee their job will be there when they come back. And they are least likely to have resources to support themselves during the break from work.

First-time parents, particularly low-income mothers, often find no other option than to quit their jobs to care for newborn children, even though work is their most effective path to self-sufficiency.”

That becomes a larger problem for the economy as a whole, according to the study’s co-authors Angela Rachidi and Ben Gitis. Their solution?

A better approach is to offer a modest, well-targeted government paid parental leave program to supplement what is already provided in the private market. …

An income-tested paid parental leave program would effectively target these workers so they can raise healthy children and remain attached to the labor force.”

Their program would include:

… a reasonable benefit ($300-$500 per week, depending on family size and income) to low- and lower-middle-income households. We suggest phasing it in and out in a way that targets those who are the least likely to already have it, as well as the least likely to handle an income loss from time away from work.”

What does that translate into on a national scale? An estimated $4.3 billion per year, based on the number of families who would qualify: roughly 2 million workers who add a child to their family each year.
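As a rough sanity check on that $4.3 billion figure: 2 million qualifying workers per year implies about $2,150 per worker. A minimal sketch, assuming a $400 average weekly benefit (the midpoint of the proposed $300–$500 range) and roughly five and a half weeks of leave; both assumptions are illustrative, not figures from the study:

```python
# Back-of-the-envelope check on the ~$4.3 billion annual cost estimate.
# ASSUMPTIONS (illustrative, not from the study): $400/week average
# benefit and ~5.4 weeks of paid leave per qualifying worker.

def annual_cost(workers, avg_weekly_benefit, weeks):
    """Total yearly program cost in dollars."""
    return workers * avg_weekly_benefit * weeks

cost = annual_cost(
    workers=2_000_000,       # qualifying workers per year (from the article)
    avg_weekly_benefit=400,  # assumed midpoint of the $300-$500 range
    weeks=5.4,               # assumed benefit duration
)
print(f"${cost / 1e9:.1f} billion per year")  # -> $4.3 billion per year
```

Under those assumptions the numbers line up with the study’s headline estimate; a longer benefit period or a higher average payment would scale the total accordingly.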

Rachidi and Gitis recognize that many are hesitant to support new government programs, since they often result in higher taxes or potentially an increase in national debt.

But it turns out, there is a financial upside to providing paid parental leave.

Research shows that paid parental leave has positive effects on employment and job continuity, both of which support economic growth.”

What’s more, the overall cost of not providing paid parental leave could be substantially higher — and not just in short-term annualized dollar output. Returning to work too soon has been shown to negatively affect children, especially those from disadvantaged backgrounds.

Rachidi and Gitis look at other proposed models on the table and find they are much more expensive. They also note the potential risk that employers may stop providing paid leave benefits to workers if there’s a federal program, but their study attempts to mitigate that risk by limiting the number of workers who would be eligible for the program to those who don’t already receive benefits.

Ultimately, self-reliance is important at every stage of life. Sustaining employment – especially as the number of dependents increases inside American households – is critical both to the stability of the family and to reducing the need for government benefits for unemployed parents. It is in the country’s interest, at both the family level and the national economic level, to help vulnerable parents retain income during the early weeks of a new child’s life and keep their jobs over the long term.

‘Choice Feminism’: Equal Opportunity and Gender Specialization

Picture this: Dad heads out to work in the morning. Mom stays home to care for the kids and maybe works part-time while they are in school. While Mom is home, she cooks, cleans, and runs the domestic sphere of the family while Dad earns the money needed to pay the bills. And everyone is happy.

Gasp! That sounds like the 1950s! Except it’s not. It’s 2017. And in this scenario, “Everyone is happy.”

“Feminism” is a word that has been loaded with undertones and assumptions for decades. And while critics may have a legitimate bone to pick with some of the social, cultural, and political issues that were born out of the 1960s feminist movement, don’t assume that the “f” word automatically refers to the man-hating, bra-burning ideology.

In fact, if we stopped to look at how millennial women — and men — now increasingly prefer traditional, female stay-at-home roles and male bread-winning roles, we might consider the principles of a certain kind of feminism that explains this recent shift.

It’s called “choice feminism,” and it is a term that has been adopted to describe the belief that women are free to choose the lifestyle they want, whether at home or in the workplace, without judgment. That work may be as the homemaker or as the breadwinner, or as a worker whose responsibilities are part-time in both of those environments.

The key is that women get to make the decision whether they stay in or work out of the home. And it’s a natural fit for the newest generation of parents.

In a recent analysis, researchers Samuel Sturgeon and W. Bradford Wilcox explore why the enthusiasm for choice feminism has increased among the millennial set. Citing a new report by sociologists Joanna Pepin and David Cotter, they write:

The increasing popularity of intensive mothering in the 1990s, frustrations over the stresses associated with balancing work and family, and a media and pop culture backlash to feminism in the 1990s — think of the ‘you can’t have it all’ meme from the era — made 1970s-style feminism, with its insistence on moms combining full-time work and family life, less appealing to a growing minority of young adults.

Translation: millennials, who as children in the 1990s watched women strain to be “on” 100 percent of the time at work and 100 percent of the time at home, think the early feminist rat race is an exhausting and undesirable way to live.

Rather than embrace a ’70s-style feminism where everything is supposed to be split 50-50 in the home, a growing share of young adults embrace an ethic closer to matching two-parent families as they really are in 21st century America: That is, millennials may take a more favorable view of gender specialization in the family because it remains quite common in their own experience and, in an era of choice feminism, less problematic.

Just as this helps explain, at least in part, why preferences in gender roles have morphed since baby boomers and Generation Xers were young, Sturgeon and Wilcox also propose what choice feminism now provides to women: equality and gender specialization.

Choice feminism allowed women to invest heavily in their children, juggle work and family responsibilities, and maintain a sense of feminist self-respect. It stands to reason that, in the spirit of this choice feminism, many young adults support an ethic of equal opportunity for women in the public sphere even as they embrace an ethic of gender specialization in the private sphere.”

The authors also note that cultural and racial shifts in demographics may have contributed to changing beliefs in the division of labor. Today, 22 percent of young adults in the U.S. are Hispanic compared to only 7 percent in 1980.

That matters, because young Hispanics (especially young Hispanic men, who prefer traditional family arrangements at higher percentages than Hispanic women) are more likely to embrace a traditional division of family and work responsibilities than other young adults.”

But since they argue that demographics explain only part of today’s millennial views, perhaps those who gasp at traditional family structures should consider the power of a woman’s choice at home and in the workforce.

Simultaneously, conservatives shouldn’t hyperventilate at the notion of supporting feminism – at least the kind that enables women to reach their full potential at work, at home, or both, because that pursuit wasn’t foisted on them. These women have made their choices, and they are pursuing their happiness.

The Dignity of Work — A UK Model for the US

There’s diamonds in the sidewalk, the gutters lined in song.
Dear, I hear that beer flows through the faucets all night long.
There’s treasure for the taking, for any hard working man,
Who’ll make his home in the American land.

Leave it to Bruce Springsteen to celebrate the value and dignity of work in one of his most patriotic songs, “American Land.” It’s not surprising that he is appreciated as one of America’s greatest musicians by people from all walks of life, from poor to rich and old to young.

One of the reasons his popularity has spanned decades is his ability to tap into the belief that “pulling yourself up by your bootstraps” is a quintessentially American attribute. Pursuing happiness is American in nature. And the ability to achieve the American dream through hard-earned work is also American in nature.

But what happens to this foundational belief when the “hard-working man” begins to disappear from the picture? What happens when the “treasure for the taking” is actually easier to acquire via government handout than through blood, sweat, and tears?

While millions – perhaps billions – have memorized lyrics to songs composed by The Boss, a non-American leader recently reminded us that hard work is the surest path to success. Iain Duncan Smith spoke about the United States’ problem with labor, welfare, and the culture of dependence, and demonstrated how the trend toward dependency is preventing so many Americans from pursuing their dreams.

Duncan Smith is a British Member of Parliament whose public service has included posts such as leader of the Conservative Party and founder and chairman of the Centre for Social Justice (CSJ), “an independent think tank committed to tackling poverty and social breakdown.” Under a program he established at the CSJ, he defined the “five pathways to poverty” — educational failure, addiction, serious personal debt, worklessness and dependency, and family breakdown. He then went on to craft a plan to reduce them.

A coalition government in which Duncan Smith served as work and pensions secretary came to power in the United Kingdom in 2010. At the time, the UK was suffering a problem similar to the United States’ — a decline in labor participation and a growing dependence on government largesse. Nearly 20 percent of UK households had no member in the workforce, and 1.4 million people (in a nation with 52 million individuals age 15 and over) had been on public assistance “for most of the previous decade.”

Duncan Smith and his allies implemented a series of reforms built on the ideas proposed by the CSJ. Today, just seven years later, the United Kingdom enjoys its lowest unemployment rate since 1975, “the highest proportion of people in work since records began,” and historic lows in the number of people unemployed and not seeking work. Its labor participation rate stands at 75 percent.

Duncan Smith recently delivered remarks at the American Enterprise Institute on the social reforms that American leaders, like their British brethren, can implement to help reverse the unintended damage caused by welfare programs. These U.S. programs — now 126 in total, 71 of which provide a cash or in-kind benefit — were created with the best of intentions. But they have ended up trapping people in indefinite dependency instead of providing a temporary safety net while they get back on their feet to earn their own success.

As Duncan Smith explains, the “temporary” nature of welfare makes not just financial sense, but human sense.

Work is about more than just money.

Culturally and socially, work is the spine that runs through a stable society. Not only is it the best way to increase your earnings, but it provides purpose, responsibility, dignity. It offers role models for children, and it builds community spirit.

Conversely, an ever-growing body of research has shown that inactivity not only reduces your financial well-being, but is directly linked to poor mental health, substance abuse, and in the very worst cases, suicide. Fundamentally, as Benjamin Franklin once observed, ‘It is the working man who is the happy man … [and] the idle man who is the miserable man.’

According to Duncan Smith, the idle man is the kiss of death for society because “worklessness” hurts both the individuals who should be earning their success and society at large. He quoted the father of economics, Adam Smith:

No society can surely be flourishing and happy, of which the far greater part of the members are poor and miserable.

Sadly, the U.S. government has taught people “learned helplessness.” It has allowed people to become so permanently dependent upon it for money, food stamps, and other financial assistance that it makes more sense for these dependents to stop pursuing work at all. This mindset bleeds into new generations of dependence and, as a result, the vicious cycle continues.

Of course, Duncan Smith carves out exceptions for individuals who need assistance, citing sickness, disability, and “times of desperate hardship” as examples of when the state should step in to carry them through their struggle. Even then, he argues, assistance should be less about sustenance and more focused on helping people shift from dependence to independence.

It is both expensive and unconservative to manage and maintain those at the bottom rather than give them the opportunity to take back control of their lives.

For all the changes that must happen for the United States to implement the proper laws and procedures — and to adopt the mindset that enables individuals to prosper through the blessing of work — Duncan Smith remains hopeful.

With a new administration, the United States has now a golden opportunity to give welfare the reform it so urgently needs.

There is the potential to create a welfare system that recognizes with compassion the situations people find themselves in, but ensures fairness for the taxpayer.

One that is more conditional, less chaotic, more dynamic.

But, above all, one that is about life change, enabling people to transform themselves and their families.

As Springsteen sings, “We Take Care of Our Own.” But the ability to pursue our happiness is easier when families are able to take care of themselves.

How Airline Apathy Explains the Need for School Choice

If you’ve ever been stranded at an airport — or gotten involved in a debate over school choice — you can certainly empathize with Frederick Hess, director of education policy studies at AEI.

In a sarcastic and slightly cranky opinion piece, Hess details a stroke of bad luck with American Airlines that ultimately prevented him from delivering an important lecture, despite his trying every maneuver possible to rebook flights, book rental cars, and hightail it through an airport.

So I bolted off the plane, asking the ‘helpful’ lady guiding us to our transfer gates to please just let the gate know I was coming (she said she would). I didn’t make it. Well, by dashing up and down escalators and such, I actually made it there just in time, barely 10 minutes before departure—but the agent had already closed the door and was nowhere to be found. The idle American agent at the gate 20 feet over didn’t much care, even though an impartial third party might’ve thought I merited at least a modicum of consideration—given that I’d spent a big chunk of my day trying to juggle air reservations and rental car plans to accommodate American’s struggles.”

This kind of experience, unfortunately, isn’t all that rare. Travelers get a raw deal at the mercy of airlines all the time – even though they have paid for their tickets and trusted the airline to deliver them safely and on time.

So why is Hess’ experience important?

Because he makes an analogy that is an excellent window into the experience of many parents when it comes to their children being stranded in a school system that drops the ball time and time again. Only with education, the stakes are much, much higher, as Hess notes.

I’m annoyed today less because my flights were goofed up (which happens), and more because no one who works for the airline seems especially interested in doing anything about it. I would feel infinitely more chipper if I felt like someone really wanted to help ensure that the problem got solved. Instead, I’m staring at the face of a big, bureaucratic morass, a face which displays a remarkable lack of passion for doing the job well.

This happens time and again when it comes to big bureaucracies. Nobody seems all that concerned about helping out, preferring instead to spout lots of stuff about policy and procedure. We can never get hold of anyone who really seems to be in charge, and it can feel like the whole process is devoid of accountability or genuine human concern.

This frustration is at the heart of the school choice debate.

The bureaucracy of public education has been attacked and debated for years. There’s no changing that. And with bureaucracies of all kinds being laden with deficiencies, it’s not a surprise that education is also a victim.

As Ronald Reagan so aptly noted,

Every once in a while, somebody has to get the bureaucracy by the neck and shake it loose and say ‘stop what you’re doing.’

But it’s important that we not throw up our hands and end on a pessimistic, fatalistic view of education. The variable Hess highlights is crucial to understanding the motive of school choice advocates – and the ability to improve Big Education by employing educators who work with passion and purpose. It’s not a question of for-profit motives; it’s about finding “smaller, more human-sized” school systems.

Hess was a traveler navigating airlines just as parents navigate schools in the hope of providing the best education possible to their children. If he had been given some semblance of genuine effort to help him reach his destination, Hess could have made his flight. Or even if he hadn’t, he could have walked away knowing that American Airlines had made its best attempt to uphold its end of the deal.

That’s not asking too much, is it?

Likewise for parents, school children ought to be given every opportunity to receive the best education, not just the one they are stuck in because that’s where Mom or Dad pays rent or their mortgage. When kids are not afforded that opportunity because the bureaucratic mess of Big Education gets in the way and their education fails them, Mom and Dad become cranky too. Or downright angry, and justifiably so. Because we all know how important education is for setting a child up to pursue happiness and success.

What we all want, I think, in an airline—and a hundred times more in a school—is that professionals exhibit a passion for doing their job well. For figuring out smart ways to solve problems. For execution.

As for the children who’ve had the benefit of school choice, but still fail? Well at least they had access to their best shot. Just as flights will be missed, children will fail. There are countless reasons why. But having the confidence that every effort was made on his or her behalf is a whole lot more palatable than watching employee after employee halfheartedly clock in and out with no desire to help you reach your final destination.

Are Happiness and Economic Growth Linked?

What makes you happy? Family, friends, a strong community? How about “economic growth”? It’s not typically a buzzword to trigger your feelings, but a recent report suggests a nation’s economic growth is a variable in one’s personal happiness.

Of course, happiness isn’t entirely dependent on economic growth, but a nation’s economic performance sure seems like it could contribute to one’s personal outlook.

Yet the authors of a global happiness report seem to suggest that happiness and economic growth have been decoupled. The just-released World Happiness Report 2017, an annual survey of about 1,000 people in each of more than 150 nations, places the United States just 14th on the happiness scale. But the real eyebrow-raiser is that the top five happiest countries are experiencing slower economic growth.

Business journalist (and former Jeopardy! champ) James Pethokoukis says that the authors’ conclusions may not be what they seem.

One of its highly touted findings is that the world’s ‘happiest’ nations — such as Norway, Denmark, and Switzerland — have been growing more slowly than the world as a whole, including the number 14-ranked U.S., in recent years. The report’s authors fully embrace the goal of pushing ‘happiness’ as the best measure of social progress rather than the ‘tyranny of GDP.’ And as economist Jeffrey Sachs writes in the report: ‘The predominant political discourse in the United States is aimed at raising economic growth, with the goal of restoring the American Dream and the happiness that is supposed to accompany it. But the data show conclusively that this is the wrong approach.’

But is that what the data show, really?

The whole thing seems a little weird when you take a closer look.

Pethokoukis points out that the “happiest” countries are small — an average of 11 million people in the first 13 happiest nations (all the ones in front of the U.S.). The “happy places” are also culturally homogeneous, particularly the Nordic nations that rank at the top. Not surprisingly, the happiest nations also want faster-rising incomes, a component of economic growth.

Pethokoukis adds that the happiest nations also suffer from their fair share of unhappiness, with high suicide rates (and possibly low expectations)! Americans, on the other hand, “are demanding, complain when dissatisfied, and by the way, also produce the hard-driving entrepreneurs like Steve Jobs and Bill Gates who push the technological frontier so Europe doesn’t have to.”

He suggests national identity and culture certainly must be variables in the happiness outlook. But more than that, happiness isn’t the result of higher incomes so much as of greater opportunity to create “a life of deeper human flourishing.”

Read Pethokoukis’ full article.

Paid Family Leave: Economic Conclusions From Three U.S. States

President Trump, with the encouragement of his daughter Ivanka, has been promoting paid family leave as a means to help families with income and work after the birth of a child or to care for a loved one who falls ill.

Democrats have long supported such plans, and Republicans are coming on board. That may surprise many who think paid leave is an unaffordable boondoggle, but the evidence overwhelmingly suggests that it’s a boon not a boondoggle, and not just for families but for businesses that offer paid family leave.

Evidence of such claims are drawn from studies of paid family leave programs in three U.S. states — California, New Jersey, and Rhode Island — that have already implemented such programs. Here are some of the conclusions:

  1. Paid leave raises the likelihood that a new mother will remain in the labor market, which can help boost her lifetime income and contributes to our economic productivity overall.
  2. Women who take parental leave are less likely to suffer from maternal depression and are more likely to breastfeed — and do so for longer periods of time — outcomes that are beneficial for the lifetime health and development of the child.
  3. Paid leave encourages men to help more at home, freeing up time for women if they want to work, which boosts household income and spurs economic growth.
  4. Fathers who take time off work at and around childbirth are more likely to be involved in childcare later in the child’s life. Children whose fathers are more involved in their early years perform better on language and cognitive tests and show stronger social development than those with less-involved fathers.
  5. Paid leave has had a positive or neutral effect on profitability, according to employers in California and New Jersey.
  6. State paid leave programs have helped employers recruit and retain talent, lower turnover, and boost morale and worker productivity.
  7. Paid leave reduces the burden on government assistance in states, suggesting potential longer-term positive budgetary implications.
  8. Uneven availability of paid family leave in the private sector ends up disproportionately benefiting higher-income workers, while low-wage workers also often lack other forms of paid days off that higher-wage workers can use for family leave. Providing low-wage, low-skill workers time to care for their families encourages work.

OK. There must be downsides, right? There are some potential hazards. These are some:

  1. States looking to develop their own paid leave policies will have to build an administrative structure to create the system.
  2. Lawmakers will grow government rather than make cuts to other programs to pay for paid family leave.
  3. Employers dropping their paid leave programs for a federal program could become quite expensive if not offset.
  4. There is the potential for “time-off creep” and the costs associated with it. Paid leave programs now run only four to six weeks, but New York has already passed a law to make its program 12 weeks.
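The “time-off creep” item above is easy to quantify: if program cost scales roughly linearly with benefit length, tripling the weeks triples the bill. A minimal sketch, assuming an illustrative $400 average weekly benefit (not a figure from the article):

```python
# "Time-off creep": if cost scales roughly linearly with benefit length,
# extending a program from 4 weeks to New York's 12 weeks triples the bill.
# The $400 average weekly benefit is an illustrative assumption.

weekly_benefit = 400  # assumed average weekly benefit, in dollars

for weeks in (4, 6, 12):
    per_worker = weekly_benefit * weeks
    print(f"{weeks:>2} weeks -> ${per_worker:,} per qualifying worker")
```

A 12-week program costs three times a 4-week program and twice a 6-week one per worker, before counting any workers newly drawn into the program.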

The United States is the only developed country in the world that does not offer paid family leave.

What do you think of implementing a universal system?

Read the entire blog series on the impact of paid family leave.

March 4 Is National Grammar Day: Don’t Mess Them Up

The rules of the English road leave many scratching their heads. That’s probably why there’s a National Grammar Day, which happens to be on Saturday.

English is considered a very difficult language for some who learn it as a second language (or even as a first), and it’s no surprise. Here are some confusing English lessons:

  • “Homonyms” are words that sound alike but have different meanings (ex. “lie” – to rest one’s body, and “lie” – to not tell the truth).
  • “Homophones” are two or more words that sound alike but have different meanings and also different spellings (ex. “led” – to have been in charge of an event, like a meeting; and “lead” – a metal).
  • “Homographs” are words that are spelled the same but have different meanings (ex. “fair” – the county kind, and “fair” – equal treatment).
  • “Heteronyms” are words that are spelled the same but don’t sound alike and have different meanings (ex. “tear” – a rip, or “tear” – liquid falling from your eye).

What’s the difference between a homonym and a homograph? Answer: Whether you get a headache thinking about it (ba-dum-tsss – an onomatopoeia).


America spends a fortune trying to educate its citizens, but even someone as lofty and educated as the president gets it wrong from time to time. President Trump wrote “Hearby” rather than “Hereby” in a tweet on Friday before correcting it.


Yet for all the harassment handed down when a high-profile person makes an error, social media memes earn extra guffaws when they include common grammar mistakes while the creator tries to project how smart he is.

Sometimes, they are uproarious. Here’s one. Can you spot the error?

[Image: grammar meme]

IT’S one of Mark Perry’s biggest peeves. Perry, whose Carpe Diem blog regularly serves as a keeper of the language, points out several other fun grammatical errors in celebration of National Grammar Day.

Whether you’re perfect or not, enjoy Perry’s list on National Grammar Day, and if you need help, you can call the Grammar Hotline at the University of West Florida.

Do you have a meme with an obvious error in English? Share it (or not) and let us all learn from our mistakes.

Shock Story: Exploiting the Homeless Addicted for Profit

The Washington City Paper is reporting a completely distressing story titled “Eviction Companies Pay the Homeless Illegally Low Wages to Put People on the Street.”

The headline pretty much says it all, but some of the details are worth noting. First, homeless men (and the occasional woman) come to a local shelter in D.C. each day hoping to be picked up as day laborers. Second, these workers are generally dealing with addiction of some kind. Third, the eviction companies have already been sued for paying these day workers below the minimum wage. Fourth, while the payments are extremely low, the companies pick homeless addicts and occasionally supplement the wages with alcohol. This effectively drives the workers to seek another fix, which leads them back to the miserable work arrangement so they can earn just enough money for another boost.

Here’s part of the report:

A man who works for both Street Sense and on the trucks, who is homeless and did not want to be named for fear of retribution from the eviction companies, says he first got work on an East Coast Express Eviction truck right after he moved to D.C. several years ago. He had heard through the grapevine that employment was available outside S.O.M.E. and was surprised to find that he did not need to fill out paperwork. When he first got on the truck, he says he saw a cooler of beer, and thought, “I’m in the right place.” It seemed like a party—and it was—drinking in a van with other guys before work. But he soon learned that whatever he drank would be deducted from his pay at the end of the day. 

And he realized why the men were getting beers. “We have seen babies crying, grandmas. … You get a beer, so you don’t have any emotion,” he says in an interview at the Street Sense offices. “You do some kind of drugs, so then you don’t care, so you leave them on the curb over there crying, and go on to next one.” He says the evictees don’t get any information either—no shelter listing or hotline number.

The man, who struggles with a drinking problem, also says it was no mystery to him why eviction companies continued to show up outside S.O.M.E. even after the lawsuit. “Instead of choosing someone professional who says, ‘I can’t do it,’ they choose people who don’t have any feelings anymore, and have given up on life,” he says. “Because they will get on this truck for $7.”

As poverty and homelessness researcher Kevin Corinth states, this is NOT OKAY. But Corinth’s reasons aren’t exactly what you may expect. For one, he’s not arguing about whether the minimum wage is “fair.” Nor does Corinth have a problem with eviction in principle. Even as a researcher on homelessness, he acknowledges that landlords have a legal and moral right to be paid rent for providing living accommodations. As he points out, “Stopping all evictions would mean that landlords would no longer be willing to accept the tenants who are at greater risk of defaulting on their rental obligations in the first place.”

Corinth also doesn’t have an issue with the person who would take the job of evicting a family because while difficult to do, a professional and empathetic worker can “help preserve a sense of humanity in the face of horrible circumstances.”

For Corinth, his issue is with eviction companies that would exploit homeless people with addiction to keep them coming back to the illegally low-wage job.

Rather than being encouraged to serve with professionalism and empathy, they are encouraged to numb their humanity with alcohol.

And that means that families at their lowest point are dehumanized as well. Their personal belongings are handled by crews of men who have shut down. Meanwhile, some workers reportedly engage in theft to supplement their wages. As ACLU attorney Scott Michelman puts it, “[l]osing your home shouldn’t mean losing your dignity.”

Corinth says there are solutions to the mistreatment of the homeless addicted aside from taking these companies to court. They include relaxing regulations on how many workers must be used to clear out a house, a requirement that leads eviction companies to look for cheap, unqualified work crews.

Another solution could be to prevent evictions from happening in the first place. Recent research has shown that offering families at risk of homelessness a modest one-time payment leads to sharp reductions in entries into homeless shelters (and presumably reduces evictions as well).

The research Corinth references is here. It notes that one funding experiment found that giving someone who is about to become homeless a single cash infusion, averaging about $1,000, could delay homelessness for two years. The research was done by offering one-time cash payments “to people on the brink of homelessness who can demonstrate that they will be able to pay rent by themselves in the future, but who have been afflicted by some nonrecurring crisis, such as a medical bill.” The team found those who received the cash infusion were 88 percent less likely to become homeless after three months and 76 percent less likely after six months. That’s a worthwhile investment, given that the overall cost of homelessness to society is much higher.
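The cost-benefit logic here can be sketched as a simple breakeven calculation. The $1,000 payment and the 76 percent six-month reduction come from the cited research; the baseline shelter-entry probability and the public cost of a shelter episode are purely illustrative assumptions, not figures from the study.

```python
# Breakeven sketch for one-time homelessness-prevention payments.
# From the cited research: ~$1,000 average payment, 76% reduction in
# shelter entry at six months.
# ASSUMPTIONS (illustrative only): the baseline probability that an
# at-risk family enters a shelter, and the public cost of one shelter stay.

payment = 1_000                # average one-time payment (from the research)
reduction = 0.76               # relative drop in shelter entry at six months
baseline_entry_prob = 0.5      # assumed baseline shelter-entry probability
shelter_episode_cost = 10_000  # assumed public cost of one shelter episode

# Expected public savings per payment = avoided entries * cost per entry.
expected_savings = baseline_entry_prob * reduction * shelter_episode_cost
print(f"expected savings per ${payment:,} payment: ${expected_savings:,.0f}")
```

Under these illustrative assumptions each payment is expected to save several times its cost; even far more conservative assumptions about entry rates and shelter costs leave the payment in the black.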

Read Corinth’s commentary here.

How the ‘Fight for $15’ Movement Can Undermine Those It Aims to Help

What’s a better solution — higher wages at the cost of jobs, or more jobs with lower wages? If you’re interested in seeing more people working, the latter is the better option. But it’s not just about the number of jobs. That’s why several economists question the logic of the “Fight for $15” movement or other minimum wage arguments.

“While boosting wages for workers is critical, helping workers retain their jobs and stay on the income ladder makes more sense for the economy.”

That’s what economist Aparna Mathur says in a recent article on wage hikes. Even the nonpartisan Congressional Budget Office found that raising the federal minimum wage to $10.10 per hour could result in the loss of 500,000 jobs.

Mathur notes that several localities aim to increase the minimum wage in the next few years, and warns that the people who will feel the impact are the very workers the wage mandates are meant to help.

Most policies of any kind involve trade-offs, and minimum wage hikes are no exception. When the government mandates that employers must pay minimum wage workers more — i.e., the hike is not because of any increases in productivity or skills — employers will strategize about how to recover the added costs. Can they pass them on to their customers? Should they invest more in automation? And of course: Should they decrease the size of the work force?

Mathur acknowledges disagreement in the economic models on the impact of minimum wages. It’s about more than just the number of jobs available: the minimum wage is a variable with major reverberations through the overall economy and how work is conducted.

She points to the results of a major recent study by New York University. The study found that raising the minimum wage had a small impact on the overall drop in hiring, but a much greater impact on the amount of work each worker is doing.

Hours worked fell sharply, with reductions as large as 3% across all workers and 25% for the lowest-wage jobs. Presumably, the study’s authors wrote, some of the reduction was caused by employers economizing on labor. However, they also wrote, hours worked also likely fell because employers hired more productive workers.

And what’s the outcome for managing less productive employees? Automation, of course. The dreaded “robots.”

Many stores and fast-food restaurants are already planning this transformation. For example, McDonald’s plans to move away from cashiers to touch-screen kiosks nationwide and to allow mobile ordering rather than pay an employee $15 an hour to bag French fries. Wendy’s is considering a similar move. Walmart already is automating many positions that employ hourly workers.

Mathur says if government policy really wants to help low-wage workers, it could try more creative approaches, like the Earned Income Tax Credit program.

“A targeted program with no risk of job loss, the EITC has been proven to lift people out of poverty, and it is the best way to boost incomes for poor households.”

At the same time, encouraging upgraded skills for workers through greater investments in on-the-job training and paid apprenticeship programs for younger workers would allow for greater upward mobility even for workers starting off in minimum wage jobs.

While boosting wages for workers is critical, helping workers retain their jobs and enabling them to move up the income ladder is even better. The risk of job loss that comes with a minimum wage hike threatens the ability of these workers to get on that ladder. States that are on track to approve such an increase should proceed with tremendous caution.

Wage Hikes: Proceed with Caution

Michael Novak’s Legacy: Welfare to Work Is Social Justice

“America’s system of democratic capitalism represents a fusion of our political, economic, and moral-cultural systems. No facet can exist apart from the others.”

This was the central thesis in the book “The Spirit of Democratic Capitalism,” written by Michael Novak and published in 1982. It’s not the only book he wrote on the subject.

Novak died Friday at age 83, and he is remembered as a titan of intellectual thought. He is the progenitor of the 1996 Welfare Reform Law, which originated from conclusions laid out in the 1987 proposal “A New Consensus on Family and Welfare” that Novak presented to President Ronald Reagan. It was the first major policy statement to suggest a work requirement in exchange for welfare aid.

That policy took shape among the 35 other books that Novak wrote during his life. Social justice was the general theme of his life’s work, and it is an outlook that helps guide new policy, like Wisconsin Gov. Scott Walker’s latest attempt to encourage a work requirement in exchange for government assistance. Opponents of the idea try to cast it as cruel, but from Novak’s point of view, work is dignity, and the state’s support of the individual without any incentive to engage with larger society is the truly socially unjust act.

As Flavio Felice described it in a 2016 essay, Novak’s notion of social justice meant everyone is a contributor to the greater good, if only for the benefit of one’s personal growth.

According to Novak, “social justice” rather expresses the decisive rejection of individualistic sentiment, on the basis of a social anthropology in which the main actor is the “person,” which he understands as “individual and community”—the ontological, epistemological, and moral center of social action. In this way, in free societies, citizens are inclined to use their own tendencies to associate, to exercise new responsibilities, and to move towards social ends. In this sense, “social justice” is the particular form taken today of the ancient virtus of justice. Therefore, it does not necessarily involve the strengthening of the presence of the State, but rather, the development of civil society, in keeping with Hayek. In the words of Luigi Sturzo, a beloved author of the same Novak: “Nothing therefore exists of human activity, which, though originally individual has no associated value; nothing among men can come into being, which does not mention any form of association.”

Similarly, the most dangerous enemies of “social justice” appear the same as denounced by Sturzo on his return to Italy from his twenty years in  exile (1924-1946), which he identified as the “evil beasts of democracy:” “statism, particracy, waste of public money.” In practice, for “statism” we mean the false belief that, by entrusting to “the State activities for productive purposes, connected to a restrictionism that stifles the freedom of private initiative,” we can “make amends for inequalities” (Sturzo). Such a degeneration in the task of the State, which denies freedom, favors “particracy”, that is, the irresponsible interference of political parties and trade unions in legislative functions, which negates equality. A corollary of the first two “evil beasts” is the “waste of public money” which would violate justice.

That’s a heavy dose of philosophy, but as Felice summarized, “The work of Novak and (cowriter Paul) Adams puts us on guard against easy shortcuts, which are so often accompanied by rhetorical proclamations and authoritarian pretensions unsuited to a society of free men.”

Novak was a counselor of popes and politicians whose gentle and warm personality made him a beloved figure to many. His legacy lives on in good social policy.

Confessions of a Catholic Convert to Capitalism

Care for the vulnerable is not unique to one religion. All major philosophies share this goal, religious or otherwise. But how does religious belief intersect with capitalism?

Many goodhearted people mistrust markets. They believe that free enterprise worsens inequality and encourages greed and materialism. Many worry that capitalism sows division and economic exclusion. These fears are reasonable.

But rejecting free enterprise is the wrong approach. In a recent essay in America magazine, I wrote that free enterprise is not inherently moral or immoral. However, it is humanity’s best tool for alleviating mass-scale poverty. It empowers billions of people to build happier lives filled with work and security.

I know you care about these big questions as much as I do.

Here are a couple excerpts from the article about my journey toward Catholicism and the free market.

As a Seattle-born bohemian living in Barcelona, my political views were predictably progressive. But my thinking began to change in my late 20s upon returning to college, which I did by correspondence while working as a musician.

I fancied myself a social justice warrior and regarded capitalism with a moderately hostile predisposition. I ‘knew’ what everyone knows: Capitalism is great for the rich but terrible for the poor. The natural progression of free enterprise is that the rich and powerful accumulate more and more of the world’s resources while the poor are exploited. That state of affairs might be fine for a follower of Ayn Rand, but it is hardly consistent for a devotee of Our Lady of Guadalupe. Right? …

As I taught about the anti-poverty properties of free enterprise, a common objection—especially among my Catholic friends—remained. ‘Okay,’ many said, ‘I see that markets have pulled up the living standards of billions, and that’s great. But they haven’t pulled people up equally. In fact, capitalism has created more inequality than we have ever seen.’ This spawns ancillary concerns about the rich getting richer at the expense of the poor, and the rising inequality of opportunity. My challenge as a Catholic economist was to answer these questions in good faith.

The evidence on income inequality seems to be all around us and irrefutable, particularly in the United States. From 1979 to today, the income won by the ‘top 1 percent’ of Americans has surged by roughly 200 percent, while the bottom four-fifths have seen income growth of only about 40 percent. Today, the share of income that flows to the top 10 percent is higher than it has been at any point since 1928, the peak of the bubble in the Roaring Twenties. And our lackluster ‘recovery’ following the Great Recession likely amplified these long-run trends. Emmanuel Saez, a University of California economist, estimates that 95 percent of all the country’s income growth from 2009 to 2012 wound up in the hands of the top 1 percent.

Taking this evidence on its face, it is easy to conclude that our capitalist system is hopelessly flawed. Digging deeper, however, produces a more textured story.

Please read the essay and let me know what you think on Twitter @arthurbrooks. If you enjoy it, pass it along to a friend or colleague — especially someone who is skeptical of capitalism.

An Engineer-Turned-Baker on What Is the True Quality of Life

Income differences are often the result of career choices and training. For instance, few would expect an engineer and a baker to make the same amount of money. But what if you’re an MIT-trained biomedical engineer who decides to open a bakery? Well, clearly, the true quality of life isn’t based on finances, but on fulfillment.

In other words, if baking means happiness, all the money in the world couldn’t make Winnette McIntosh Ambrose happy being just an engineer. That was the quality-of-life choice that led Ambrose to open Sweet Lobby Café in Washington, D.C.

“It was during my post-doc at the NIH that I had the bright idea, I could do two careers. Why don’t I open a shop while doing my post-doc?” Ambrose said of her decision. Eventually, however, “as a small business owner I had to decide where to place most of my energies.”

Ambrose said years later, she still gets questions about why she left her “prestigious” career to focus on patisserie cupcakes. Most people are curious how she would leave her breakthrough field of vision-saving technology for something less “honorable” or “needed.”

She has a quick answer. Ambrose says that whatever one’s career choice, it is the right one if it means fulfilling a higher calling.

“As human beings we’re created with one purpose, and that is to glorify God, and that is to do that in whatever sphere we might be in,” she said.

Being a business owner means working long hours, but it also means flexibility. For Ambrose, having a few hours in the morning to play with her child or take her to school was attractive. She learned along the way that having a business also meant she could expand quality of life for others.

“I started to consider what a business really meant. It meant a community where you can provide employment. You have an opportunity to do something else that I love, which is to mentor young people … It’s amazing to see the transformation.”

Winnette McIntosh Ambrose’s story is part of a new documentary called “To Whom is Given,” which looks at business owners’ faith-based decisions to help the common good.

Click here to learn more about Winnette McIntosh Ambrose and “To Whom Is Given.”

How Journeyman Electricians Were “Gifted” a Second Chance to Succeed

Second chances are easier said than given. But that doesn’t mean there are no second chances. In fact, one electrical contracting firm decided that it was going to invest in second chances, and since then, business has snowballed.

For the Weifield Group in Denver, Colo., it was an evolution and, ultimately, a conscious decision by the company’s owners to create an environment where people were involved in something bigger than themselves.

Why? Not just because it generated a lot of work, but because it was the best way to take advantage of “God’s gift” of leading a profitable business.

“God gives you different ‘giftings’ and if you have that business gifting and you can excel within business, you can capture a big audience,” said Karla Nugent, chief business development officer of the Weifield Group. “Being in that position where you can have an area of influence and affect people positively is powerful.”

“The construction market can be a rough trade,” said Seth Anderson, CEO of the Weifield Group. “We decided, ‘Hey we can do this, we can do this better. We can provide the quality. We can provide a good place for the employees to learn and develop new skills.'”

Becoming a journeyman electrician takes four years of training. Weifield decided to start an apprenticeship program lasting up to four years, which includes 40 hours of work per week and health insurance.

For many of Weifield’s 300 employees, that kind of on-the-job training has been a lifesaver. Many of its apprentices are ex-felons or recovering addicts who truly needed second chances. Being an ex-felon often makes it difficult to find decent work.

Not every company can afford to provide this kind of intensive and expensive on-the-job training. But for Weifield, it enabled the company to raise its skill level, better serve its clients, and take its business to the next level.

Now that the company is flourishing, Weifield is expanding its outreach to help charities and community organizations, not just run a company. Nugent said the growth is no surprise, even as the company keeps changing.

“We’re all blessed with all these talents, you know? How do you do something that’s bigger than build a building?”

The story of the Weifield Group is part of a new documentary called “To Whom is Given,” which looks at business owners’ faith-based decisions to help the common good. Learn more about the Weifield Group and “To Whom Is Given.”

Best Friends, Opposing Views: Getting Along in the Age of Disdain

In a world of “fake news” and “filter bubbles,” can you really maintain friendships with people who disagree with you?

If Robbie George and Cornel West are any indication, the answer is not only yes, but that people on “the other side of the aisle” can be the best of friends.

These two professors, one at Princeton, one at Harvard, were introduced by Andrew Perlmutter, then a religion student starting a campus magazine at Princeton. The magazine’s inaugural issue had one professor select another for an interview. West selected George.

The interview between the two, who had never met, was supposed to last an hour. It lasted four and a half hours.

“There’s no doubt that our spirits and our souls resonated, and intellectually we were both on fire talking about the great classical texts,” West said.

That’s when they decided to teach a class together. The 12 books to be studied that first semester spanned Plato to Martin Luther King, Jr. The two continued the class for 10 years, together selecting the texts for future seminars.

Recently, the two men got together to discuss their relationship, the purpose of studying liberal arts, and the value of finding common ground with people you may not otherwise know. Ultimately, George concludes, the examined life may not be pretty, but it is well-lived. And it doesn’t have to be in an ivory tower.

“The key element of the liberal arts is self-mastery,” George said. “Self-mastery doesn’t require a college education.”

Philosophically, the two couldn’t be more different. West is a liberal who supported Bernie Sanders over Hillary Clinton. George is a conservative who said he was threatened with “excommunication” from the right for not supporting Donald Trump. The two said their criticism of the political party lines was a matter of commitment to their values and a “quest for integrity, honesty, and decency.”

“It’s not pure, it’s not pristine, but it has much to do with how we were raised,” West said. “It has much to do with the choices we make in terms of our religious Christian faith. It has something to do with the traditions that we choose to be a part of, and also how we choose to die, that we intend to be faithful unto death.”

Want to Work? Then Don’t Wait For Universal Basic Income

I recently read an interesting series of memos that propose three possible futures for the U.S. economy. This suite of essays, published by the Knight Foundation, merits a read if you’re interested in innovation and techno-futurism.

Their most optimistic scenario includes a version of a “universal basic income,” a popular policy idea among academics. The UBI would replace most complicated, conditional welfare programs with a straight-up minimum income guarantee that everyone receives from the government simply for being alive. (Nice work if you can get it!)

The UBI is the rare idea that garners support from both liberal and conservative intellectuals. Progressives like the idea of a generous and unconditional benefit for anyone who needs it; conservatives like the idea of replacing messy bureaucracies with a much clearer and more concise policy.

Unfortunately, on this front, I am the skunk at the garden party. As I wrote in a drive-by Medium response to the Knight memo, simply conceding a “post-work” future and paying everyone a salary to breathe is a poor substitute for the tougher job of actually getting people back to work. As the memo rightly notes, there are huge costs to simply cutting work out of people’s lives, even if you mitigate the financial aspect.

You can read the Medium post for my favorite research on this, but here’s one sample. Running my own statistical analysis on some survey data, I have found that Americans who have a job and feel successful at it are more than twice as likely to say they’re “very happy” as people who don’t meet those conditions. Importantly, this holds up when you control for income. Put simply, having a reason to set our alarm each morning gives us a psychic benefit that goes way beyond a paycheck.

What’s the better, more meaningful solution? How about we try a radical new agenda for forming human capital that empowers more Americans to stay engaged in the economy, rather than making it less painful for them to drop out?

‘Psychic Numbing’: How to Avoid Desensitization to Bad News

No matter your political inclinations, we can all agree on one simple fact: 2016 was a crazy year. Anger and resentment became political focal points across the Western world. Foreign policy crises, especially the Syrian civil war, burned hot throughout the year with little resolution in sight. A wave of corruption and scandal took down government leaders on at least three different continents.

Everyone you talk to will find different elements of 2016 to condemn and to celebrate. Some will be outraged by the presidential election results but thrilled with the big leftward steps taken in our culture and popular media. Plenty of others will have precisely the opposite view, pleased with political victories but deeply unsettled about the broader direction of society. Wherever you fall personally, it seems safe to say that nobody will remember the last year as an apogee of optimism, warm-heartedness, or American unity.

What does that mean for your 2017? In the face of events or trends we dislike, it can be tempting to try to simply care less about the world around us. When the cable news gets too wearying, it seems like we should simply turn off the TV. Perhaps the prudent path forward is to pull up the informational drawbridges that connect us to the world and redirect our attention inward.

There is something noble in this instinct, but there is also something dangerous and destructive. A little social science can help us discern what to do.

First, let’s remember that these sensations are nothing new. Tragedies have always been part of life. That means there’s a surprisingly robust academic literature on the subject. And so, over the past couple of weeks, when I wasn’t hunting with my son Carlos over the holiday, I dug into some of the research that looks at our response to large-scale traumatic events. (First prize for nerdiest dad!) I outlined my findings in a recent New York Times column, but here are some of the basic takeaways:

It turns out that social scientists have a term for when people simply throw up their hands in response to overwhelming circumstances: “psychic numbing.” Some of the most interesting research on this topic comes from Paul Slovic, a psychologist at the University of Oregon. His body of work shows that when tragedy is large in magnitude and in a distant location, we become desensitized. Recent history shows us some of the depressing implications. For example, while many of us feel compassion for the refugees fleeing war-torn countries in Africa and the Middle East, the organized response to such events is muted at best.

Slovic wasn’t the first academic to talk about “psychic numbing.” Any fan of Adam Smith will likely recall a famous passage from his Theory of Moral Sentiments where Smith discusses Europeans responding to an apocalyptic earthquake in China:

“If he was to lose his little finger tomorrow, he would not sleep tonight; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”

Is there a solution to psychic numbing? Is there a better way forward than either feeling constant despair about events we can’t control or cauterizing our normal human empathy?

Absolutely, but it’s not what you might presuppose. When we hear of these tragedies, we often rush to grasp the big picture. Collect the data. Gather the evidence. Figure out what systemic changes we can demand from on high. Let me propose that this thinking is part of the problem. What if the real solution, on a personal level, is to do the opposite of “thinking big”?

Any readers who work in fundraising have likely heard some version of the saying, “One is greater than one million.” No, this isn’t bad math. It’s the real-world application of a “think small” philosophy. As I wrote in the Times column, “when it comes to people in need, one million is a statistic, while one is a human story.” Thinking small allows us to continue paying attention to trends or events that disturb us, but by focusing on individual victims instead of just on global systems, we limit the scope of our empathy to circumstances we may actually be able to improve through our own efforts.

I have seen this “1 > 1 million” axiom at work in my own life. As you all know, of late, there has been an incendiary bipartisan backlash against globalization and the notion of an interconnected world. But despite the short-term shift in political winds, my own personal and scholarly appreciation for globalization has only grown stronger.

I have contemplated the many people I personally know whose lives have been saved by our globalized world. While I know of many such stories, my thoughts always return to my daughter, Marina. My wife and I adopted her 12 years ago from an orphanage in China. Fifty years ago, that would have been virtually impossible, and it would be even more difficult today, thanks to misguided government policies that limit foreign adoptions. Her presence in my life is not only a profound blessing. It is also a simple reminder that the walls of protectionism and restriction don’t only block the movement of physical capital and traded goods. They also close the valve of opportunity for millions of children around the world.

Here’s a little challenge for the beginning of this new year. Look back on the events or trends that disturbed you most in 2016. Then, instead of thinking about global, symbolic protest movements you can join or systematic changes you can demand from on high, contemplate a practical way to familiarize yourself with one human being who has been affected. Then, find a way to concretely help that individual.