Words Matter: The Power of Speech in Changing Minds

Words are powerful, and, when used well, they can incite people to both good and evil. They give those in positions of power, well, power – and lots of it. And, thanks to the Bill of Rights, specifically the very first item on it, people can say almost anything with, presumably, no legal consequences. This means that when someone with influence says something publicly, it can have a huge impact on society.

While everyone has the right to say whatever he or she wants, those with influence over audiences have the responsibility to exercise their free speech with vigilance. While speech can be, and is, used benevolently, it is also used nefariously. Examples of both are unneeded here; the evidence for each is plentiful and ever growing.

The media are not the only ones with this responsibility. Anybody who has influence over any number of people should be aware of the impact of his or her words. Words matter, and saying certain things can have unforeseen consequences. The expression “Be careful what you wish for” wasn’t created in a vacuum.

A gut-wrenching story illustrates the importance of this responsibility on a very personal level. In Massachusetts, a woman was found guilty of involuntary manslaughter for sending text messages urging her boyfriend to commit suicide. She repeatedly told her boyfriend to get back into his truck while it was filling with carbon monoxide. While her speech is protected under the First Amendment to an extent, the consequences of her words are too real to be ignored. She ignored her responsibility to exercise this right with caution and is being punished for her “reckless conduct.”

The recent shooting of Louisiana Rep. Steve Scalise offers a lesson as well. A distraught Bernie Sanders supporter, angry over the recent election of Donald Trump, traveled from Illinois to Virginia and opened fire on a group of Republican lawmakers. The shooter may have been struggling with mental illness at the time, but is it possible that all the toxic, and sometimes violent, rhetoric against President Trump pushed him to do what he did? Would he have held his fire if he hadn’t been steeped in media outlets that constantly attacked Trump, making the president seem more evil than Satan himself? We will never know with certainty, since the shooter is now dead, but the rhetoric can’t be written off.

And here’s why we can’t just look the other way (so to speak!): to say the rhetoric had no influence in the commission of the crime is to deny that speech can also bring good. National Review columnist Jonah Goldberg articulated the relationship between free speech and action well.

I have always thought it absurd to claim that expression cannot lead people to do bad things, precisely because it is so obvious that expression can lead people to do good things. According to legend, Abraham Lincoln told Harriet Beecher Stowe, ‘So you’re the little woman who wrote the book that started this great war.’ Should we mock Lincoln for saying something ridiculous?

As Irving Kristol once put it, ‘If you believe that no one was ever corrupted by a book, you have also to believe that no one was ever improved by a book. You have to believe, in other words, that art is morally trivial and that education is morally irrelevant.’

If words don’t matter, then democracy is a joke, because democracy depends entirely on making arguments — not for killing, but for voting. Only a fool would argue that words can move people to vote but not to kill.

Goldberg also points out that the First Amendment was built on an effort to stop leaders from murdering in the name of religion.

Ironically, free speech was born in an attempt to stop killing. It has its roots in freedom of conscience. Before the Peace of Westphalia in 1648, the common practice was that the rulers’ religion determined their subjects’ faith too. Religious dissent was not only heresy but a kind of treason. After Westphalia, exhaustion with religion-motivated bloodshed created space for toleration. As the historian C. V. Wedgwood put it, the West had begun to understand ‘the essential futility of putting the beliefs of the mind to the judgment of the sword.’

This didn’t mean that Protestants instantly stopped hating Catholics or vice versa. Nor did it mean that the more ecumenical hatred of Jews vanished. What it did mean is that it was no longer acceptable to kill people simply for what they believed — or said.

But words still mattered. Art still moved people. And the law is not the full and final measure of morality.

All in all, freedom of speech is a considerable power granted to the people of this country. And, in the words of one well-known superhero’s uncle, “With great power comes great responsibility.”

The Problems With Seattle’s Minimum Wage Debate

A University of Washington study on the impact of raising Seattle’s minimum wage from $11 to $13 in 2016 recently revealed some disturbing effects: the number of minimum-wage jobs declined, and while lower-income workers earned higher hourly wages, they worked fewer hours, resulting in a net loss in earnings.

The study, commissioned by the city, was so disheartening that the mayor of Seattle commissioned a second study in hopes of better results. But shopping for another study because the implications of the first were not what was expected won’t help the people affected by the policy. It will keep the minimum wage debate alive, though.

That said, some limitations of the University of Washington study, as pointed out by economist Michael Strain, suggest that Seattle’s experiment won’t end any time soon.

The data it used make it difficult in some instances to determine whether a particular job is in the city of Seattle or elsewhere in Washington state, and the study attempts to deal with this challenge by limiting its scope to workers at single-location firms. The data also don’t include contractors.

To determine the effects of Seattle’s minimum wage increase, the study compares hours and wages in Seattle to those in neighboring counties, before and after the Seattle increase. This is reasonable, but one could also reasonably be concerned that those neighboring counties are not the best comparison group. To address this possibility, the study uses more complex statistical methods. There again, it’s reasonable to question those methods — but not the conclusion that the Vigdor study materially advances our understanding of the effects of the minimum wage. It’s hard for me to understand how any economist could conclude otherwise (emphasis added).

At the same time, the Vigdor study is just one study. Should it increase our confidence that minimum wage increases can hurt low-wage workers? Of course. Does it prove that point for all time in all places? Of course not. The Vigdor study covers only one city. The economics of city-specific minimum wage increases are probably somewhat different from that of state or federal increases. It’s also hard to be sure that what happened when Seattle increased its wage to $13 per hour in the context of getting to $15 per hour can be generalized to what might happen if, say, Kansas City increased its minimum wage to a different amount in a different context over different years.

So where does this leave the debate over minimum wages? Right where it was before: confused.

In other words, the University of Washington study was conducted professionally, with the best methodologies available to economists for sorting through the information. But circumstances are not static, and trying to settle the overall minimum wage debate based on one city’s experience is an instance where politics gets in the way of policy.

The rise in the wage was part of a three-year plan to get Seattle’s minimum wage to $15 per hour. The last bump took effect at the beginning of this year. Concerns that such a sharp minimum wage increase hurts lower-income workers are legitimate, even as the impact of the final increase has yet to be determined.

The Vigdor study does not advocate a social policy. It merely points out that the effects of the policy Seattle chose for itself are not what its engineers had hoped. As Strain points out, popular solutions are not necessarily the best solutions for the people they are meant to help.

When thinking about whether minimum wage increases are good or bad, you have to think clearly about the social goal you are trying to achieve. If your goal is to help reduce income inequality and to increase the earnings of some middle-class households, then the minimum wage is not a crazy policy.

But if your goal is to help the least skilled, least experienced, most vulnerable members of society to get their feet on the first rung of the employment ladder and to start climbing, then the minimum wage is counterproductive. Its costs are concentrated among those vulnerable workers. It is an obstacle in their paths. It is bad policy.

Read the complete Strain article at Bloomberg

The Real Cause of America’s Declining Labor Participation Rate? Boys and Their Joysticks

A wily and widespread addiction has caused a massive epidemic among young men — one so bad that they are no longer working. This addiction has a name: video games. That’s right, video games have sapped America’s male youth of its ability to be productive, to function eight hours a day at a job. Their brains are fried.
That’s what you would conclude from media reports on a study titled “Leisure Luxuries and the Labor Supply of Young Men,” which estimates that between 2000 and 2016, young men’s growing premium on leisure accounted for 23 to 46 percent of the decline in their market work.
The reason, according to the study’s authors: Young men would rather play video games.
The four researchers conducting the study found that young men worked 12 percent less time in 2012-2015 than in 2004-2007. At the same time, they dedicated 2.3 hours more to leisure activities. Eighty-two percent of that extra leisure time went to recreational computing and video gaming.
By comparison, men aged 31 to 55 decreased their hours worked by only 8 percent over the same period, and without the commensurate uptick in video game playing.
This is where the chicken-and-egg question (do video games keep young men from working, or do idle young men simply play more video games?) gets cracked, and columnist James Pethokoukis concludes that “America faces a massive array of daunting economic challenges but Overwatch, Final Fantasy, and Call of Duty are not among them.”
First of all, it’s a red flag that the big gaps in hours and employment between younger and older men emerged during the Great Recession and Not So Great Recovery. There are lots of potential non-video-game explanations for this. For instance, employers might have started demanding more education or experience before hiring during a time of economic tumult. …
The big jobs event in 2007 wasn’t the release of Halo 3. It was the start of a severe economic downturn.
If the recession and recovery played a big role in young men working less, then work rates should improve the further we move into the economic expansion. And that’s exactly what seems to be happening.
The employment-to-population ratio — the share of a particular population with a job — for 20- to 24-year-olds fell to 61.3 percent in 2010 from 72.7 percent in 2006, the last full non-recession year. But that number has since rebounded to 66.2 percent. Is video game quality suddenly getting worse?
Obviously, the answer to that question is no. Even the study’s authors note that as the economic recovery kicked in, total leisure time enjoyed by non-employed young men fell by five hours per week between 2012 and 2015.
So if young men are not working and not playing (and not in school and not caring for children, say the authors), what are young men doing? Maybe looking for work? Or maybe they’re doing chores for their parents since the percentage of young men living with a close relative between 2000 and 2015 increased by 12 points.
That’s a nice thought, though it is not the answer, according to the study’s authors. Not under consideration in the analysis: time spent on Facebook or web browsing. Also not included: how many people are multitasking, playing a computer game while riding the bus, for instance.
Even if these young men aren’t working, they don’t seem too upset about it. Surveys find that 21- to 30-year-old men were also 7 percentage points happier than men of the same age in the early 2000s. Why? Well, if you’re not working and you live in your parents’ basement, you probably have few cares. Voilà, instant satisfaction.
Pethokoukis notes that “gamers can still be workers,” and workers are still in demand even as the labor force participation rate for young men decreases. That is all the more reason to ask what is motivating younger workers to sit out the job market. The answer is not conclusively video games.

Is There Any Room for Diversity of Thought on New England College Campuses?

The quintessential image of an idyllic college campus usually involves students walking across the quad with colorful leaves falling in the background. Their backpacks are heavy with books, or maybe the students carry a particularly thick text as they wave their hands in heated discussion, moving as if floating on a cloud of intellectual stimulation.

Nowhere is this image better envisioned than on the campuses of New England: the Dartmouths, Harvards, and Yales of higher education.

Yet you’d be wrong to think these imagined discussions are steeped in diversity of thought. That’s not what’s happening on these campuses, according to the Heterodox Academy, which ranked 200 schools on how much viewpoint diversity one can expect to find. The organization is composed of professors who have pledged to support and respect diverse perspectives, particularly political perspectives, and to foster an environment where people feel free to speak their minds. Its rankings collate several sources of information, including whether a school is committed to the Chicago Principles of Free Expression, recent events on campus, and the implementation of speech codes.

Samuel J. Abrams, a professor of politics and social science at Sarah Lawrence College and a member of the Heterodox Academy, says that the results are particularly troubling when it comes to the storied institutions of New England.

The ranking has revealed that New England is by far the worst region of the country, especially for liberal-arts colleges, when it comes to campuses that support and maintain viewpoint diversity. With Harvard, Yale, Brown, and Tufts on the university side and Williams, Wesleyan, Smith, Amherst, and Mount Holyoke on the liberal-arts college side, these schools reflect the politics of the region and were all at the bottom of the rankings in terms of viewpoint diversity. This could well be the first time that these esteemed institutions have found themselves at the bottom of national rankings that are so crucial to the very mission of higher education.

But schools in the Upper Midwest and along the West Coast didn’t fare well either. The schools of the South and Midwest were described as the “least closed” in terms of diversity of thought.

Abrams notes that it may be easy to dismiss the findings as imperfect or one-offs, but they are becoming part of a trend.

New England has long viewed its progressive and social-justice leanings as part of its historical fabric, and the ideological preferences of those teaching in its institutions certainly reflect that. …

Taken together, these studies should give pause to New Englanders and anyone else interested in education, civic life, and questions of innovation and social progress. Students — current, future, and former — along with parents, trustees, and those in the community, should demand that institutions of higher education recommit themselves to the free exchange of a multiplicity of ideas. Viewpoint diversity is what drives progress on countless fronts, and it can help forestall the almost weekly nationwide blowups over speech and ideas.

This trend may get worse or better in the near future — that will depend on leadership at these colleges, leadership that goes all the way to the top. While Charles Murray recently lamented how Middlebury’s president dismissed a riot that left a professor injured and drove free speech off campus, some school presidents are starting to see the downside of a lack of intellectual diversity.

Whether these schools help students learn to think critically, accept dissent, and function constructively when challenged will determine whether generations to come protect and preserve the principles held dear by the nation’s Founding Fathers, the principles that make American exceptionalism the envy of the world.

Patient-Based Health Care … on Facebook?

Bertrand Might has a rare genetic disorder that his family confirmed in 2012 after almost four years of searching for an explanation. Bertrand was the first person ever documented with his disease, called NGLY1 deficiency.

When his family finally discovered what Bertrand was facing, they at least had an answer. But then they faced another problem — finding others coping with the same ordeal.

It’s a common problem for people with unusual illnesses. Because some diseases are so rare, when a family finally gets a diagnosis, they want to compare notes with others to learn tricks and tips for managing their situation. Unfortunately, in such cases, these others are hard to locate. Medical data networks are hard to access and usually don’t have much information in them.

Matt Might, Bertrand’s dad, had a background in tech, and was able to juice a blog post to get picked up in search engines. The post went viral and Might got a lot of news coverage about the problem his son was facing. He has since found 15 people in the world with the same disease Bertrand has.

But not everyone has that success, even as Google and other sites try to harness their technological power to make medical data easier to access and control. For all their efforts, though, David Shaywitz, director of strategic and commercial planning at Theravance, a publicly held drug development company in South San Francisco, says Facebook may already be the best-positioned platform to support the patient-centered health care that so many people dream about as the future of medicine.

Facebook is where patients with rare conditions, and their families, often go to connect with others in similar situations – typically via private groups. Apparently, these can be extremely specific – the example the panelist cited was childhood epilepsy due to one or another individual genetic mutation. Families reportedly self-organize into private groups based on the specific mutation, and share experiences and learnings. …

The irony, of course, is that because of its features and popularity, Facebook has organically emerged as arguably the most attractive platform for patient groups to organize – despite the far more deliberate efforts of other companies and organizations that offer platforms aimed at bringing patients together. …

Now, everyone reading this post is probably familiar with Facebook. It’s quirky. It can manipulate what you see and don’t see, whether you can share your opinion or have your opinion banned. It tries to influence what viewpoints should be supported and which should be ignored. And it really only provides an illusion of privacy when, in fact, one false setting and you’ve gone “public” or worse, “live.”

But then again, isn’t publicity what people in the Mights’ situation are looking for? And doesn’t Facebook have a whole lot of people looking for other people to “friend”? Facebook’s influence is unparalleled.

Facebook, at its core, is about cultivating relationships — in marked distinction to the transactional core of Google (search) and Amazon (deliver). The core mission of Facebook is to connect people – and to help good things emerge from these connections. What better forum than Facebook to bring patients together — and what better platform for health?

As Shaywitz notes, Facebook has already seen success in the health care arena, most notably allowing people to list their organ donor status, “an initiative which produced an immediate lift in organ donor registrations.”

Furthermore, as a platform to serve patients, Facebook already has the framework that other organizations are trying to build or replicate. Might told Shaywitz that Facebook could do a lot more, like create an opt-in “find patients like me” service. Shaywitz suggests other applications, like “user-friendly medical data import, sharing, visualization, and analysis.”

Ultimately, however, Facebook already has harnessed what patient-based health care is all about.

What many technologists fail to appreciate about health care is the importance and value of relationships, of human connection, of community. At its best and most foundational, medicine is about relationships, not transactions. Most of medicine, health, and wellness isn’t about showing up with a discrete question and leaving with a discrete answer. Our experience of illness and disease is so much more complex and nuanced, individualized and personal, a process of understanding that unfolds over time. The best physicians and care providers recognize this, and appreciate the importance of listening, and the value of longitudinal connection.

Do you think Facebook can appropriately manage health care databases and connections? Leave your comment.

A Tax Fix That Helps Single Adults More Than Raising the Minimum Wage

Last week, a study released by the University of Washington on the impact of Seattle’s decision to raise the minimum wage to $13 caused quite a stir. The study showed that the sudden increase in the minimum wage – from $11 to $13 – led to low-wage workers facing reduced hours, fewer jobs, and lower earnings. These effects were not seen after the first increase from $9.47 to $11 in 2015, but they did appear with the minimum wage increase in 2016.

When the city first decided to implement a $15 per hour minimum wage (the $15 hourly wage took effect Jan. 1 of this year), supporters of raising the minimum wage argued that it would allow lower-income employees to earn more money. Opponents warned that it would cause people to lose their jobs.

Some opponents of raising the minimum wage say other methods for helping low-income workers would be more effective while not harming employment rates. One such idea is a tax credit that would be given to low-income workers in direct proportion to how much they earn on their own.

The idea is that if you work, you can benefit up to a certain salary by getting supplemental income. The program is called the Earned Income Tax Credit (EITC).
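To make the mechanics concrete, here is a minimal sketch of how an EITC-style credit behaves: it phases in with each dollar earned, plateaus at a maximum, and then phases out at higher earnings. The rates and thresholds below are hypothetical round numbers chosen for illustration, not the actual IRS schedule for any year or family size.

```python
# Hypothetical EITC-style schedule (NOT real IRS parameters).
PHASE_IN_RATE = 0.34      # credit earned per dollar of wages
PLATEAU_START = 10_000    # earnings at which the credit stops growing
PHASE_OUT_START = 18_000  # earnings at which the credit starts shrinking
PHASE_OUT_RATE = 0.16     # credit lost per dollar above the phase-out start

MAX_CREDIT = PHASE_IN_RATE * PLATEAU_START  # about $3,400 under these numbers

def eitc(earnings: float) -> float:
    """Supplement paid for a given level of earned income."""
    if earnings <= 0:
        return 0.0  # no work, no credit: the program rewards work
    if earnings < PLATEAU_START:
        return PHASE_IN_RATE * earnings  # phase-in: credit grows with work
    if earnings <= PHASE_OUT_START:
        return MAX_CREDIT                # plateau: full credit
    # phase-out: credit shrinks gradually, never below zero
    return max(0.0, MAX_CREDIT - PHASE_OUT_RATE * (earnings - PHASE_OUT_START))

for w in (0, 5_000, 10_000, 20_000, 40_000):
    print(f"earnings ${w:>6,} -> credit ${eitc(w):,.2f}")
```

The key design point this illustrates is the phase-in: unlike a flat benefit, the supplement rises with each dollar earned at the bottom of the scale, which is why the credit encourages work rather than replacing it.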

How useful is the EITC? According to MDRC, a research group that studies the impact of social policy, the EITC has three major advantages:

The EITC encourages and rewards work. The EITC supplements each dollar that a low-wage worker earns up to a certain limit, providing incentives for the unemployed and welfare recipients to work and for low-wage workers to work more hours. A strong body of evidence demonstrates that work-based earnings supplements such as the EITC boost employment and earnings while increasing work effort.

The EITC reduces poverty. In 2015, the EITC lifted about 6.5 million people out of poverty, including about 3.3 million children. The number of poor children would have been more than one-quarter higher without the EITC. The credit reduced the severity of poverty for another 21.2 million people, including 7.7 million children. Workers in cities, small towns, and rural areas all benefit from the EITC.

EITC payments support important investments by families. Research indicates that families use the EITC to pay for necessities, repair homes, maintain vehicles that are needed to commute to work, and in some cases, obtain additional education or training to boost their employment prospects and earning power.

What the EITC doesn’t do is help people who don’t have kids or who have kids but don’t have custody of them. That’s why MDRC conducted a study in New York City and Atlanta to see the impact of extending the EITC to single adults.

Why is it important to help single or childless workers? Because when wages and employment rates fall, low-skilled, low-income workers get hurt the most, and that segment of the workforce includes a lot of single people! And there’s another tidbit to consider: Many of these adults do in fact have children but are not the custodial parent. So even though they don’t have kids in their households, they are still responsible for children.

The three-year MDRC study in New York concluded that when people received the extra boost to their income (which maxed out at $2,000 per year for three years), not only did their pay increase, but so did the number of people employed.

An added bonus to the uptick in incomes is that a significant segment of people in the pilot program ended up paying more of their court-ordered child support payments!

Paycheck Plus recipients paid an average of $54 per month more in child support than individuals in the control group — a 39 percent increase.
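As a back-of-the-envelope check on those two figures (my own arithmetic, not a calculation from the study), a $54-per-month increase that amounts to 39 percent implies a control-group average of roughly $138 per month:

```python
extra_per_month = 54.0  # additional child support paid by Paycheck Plus group
pct_increase = 0.39     # reported as a 39 percent increase

control_avg = extra_per_month / pct_increase  # implied control-group average
treatment_avg = control_avg + extra_per_month

print(f"implied control-group average:  ${control_avg:.2f}/month")   # $138.46/month
print(f"implied Paycheck Plus average:  ${treatment_avg:.2f}/month") # $192.46/month
```

The two reported numbers are internally consistent, which is a useful habit of mind when reading study summaries.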

An additional benefit found in the study is that when people used the EITC, they actually filed their taxes. Now, that may seem like a negative to some — paying taxes isn’t exactly high on the list of stuff to be happy about — but paying taxes is a civic responsibility, even a legal requirement. And saying that you pay taxes is actually a common way of arguing that you have a say in how this country is run. So, hurray for the humblebrag. Now stand up and be counted.

The EITC isn’t the be-all and end-all of ending poverty, and the tax credit does suffer from high error rates, but as a means of pulling people off the couch, it is a good way to encourage and reward work, and work is a formidable tool for helping people gain more than just money. It is a means of providing dignity and learning skills that enable workers to aim higher for themselves. That path starts with the first dollar earned.

Proud to Be an American This Independence Day?

Are America’s best days ahead? It’s a time-tested question asked for decades to gauge the nation’s mood, and the answers give clues on whether people are proud to be an American or whether they are “over” America’s grand experiment. Fortunately, the fundamental belief in the greatness of the nation is still strong.

As Independence Day 2017 approaches, Americans are feeling pretty good about the nation’s form if less so about its function.

According to a new report that looked at a series of polling questions repeatedly administered over many years, the American spirit is still trending strong. As recently as March, 75 percent of Americans told the Gallup polling company that they are “very” or “extremely” proud to be an American. Unfortunately, this number is down six points from the previous two years.

But other poll questions that looked at particular aspects of America showed good will toward the nation’s ideals and achievements. For instance, 84 percent told Gallup they are proud to live under the U.S. system of government. More than half of Americans in an AP/NORC poll said they are extremely or very proud of America’s Armed Forces, as well as achievements in science, technology, sports, history, arts, and literature.

As for the nation’s best days, 62 percent of registered voters told Fox News in May that America’s best days are ahead; 29 percent said they were behind us. That’s an increase from recession-era May 2009, when 57 percent thought our best days were ahead and 33 percent said they were behind, but slightly down from mid-2012.

As for exceptionalism – the very profound idea that America is unlike any other nation because of its emphasis on life, liberty, and the pursuit of happiness – 81 percent told Gallup in 2016 that America is exceptional and holds a responsibility to be a leader in the world.

But as Karlyn Bowman and Eleanor O’Neil, researchers on public opinion and its impact on U.S. policy, write, just because people are proud of their country doesn’t mean they are happy with how it’s being run.

Pollsters tend to focus on our problems, and they are real, of course. When you care deeply about your country, you want to shine a light on problems to fix them. …

It will come as no surprise to anyone that we are dissatisfied with performance these days. In recent months, in a question Gallup has asked since the 1930s about the most important problem facing the country, more people volunteered “poor leadership/dissatisfaction with government” (25 percent of respondents) than mentioned any other problem. In a 2017 AP/NORC survey, 53 percent said political polarization was extremely or very threatening to the American way of life. It ranked higher than all of the other things asked about including the nation’s political leaders, illegal immigration, economic inequality, the influence from foreign governments, and legal immigration.

Likewise, the notion of division is palpable, with 86 percent saying they believe America “is more politically divided than in the past,” the highest response on this question, which was first asked in 2004. Around six in ten feel Donald Trump is doing more to divide the country than unite it.

So, if a majority of Americans feel divided and are not confident in the way the government is being run, but they are still optimistic that problems can be fixed, can common ground be found? How do we get back to functioning cohesively? Could it be a grand project like putting a man on the moon? Does change start with us? The big ideas are noteworthy topics to remember and celebrate on America’s birthday.

Happy Independence Day!

What’s your idea for bringing together those who are proud to be an American and getting them to work together to solve the country’s biggest challenges? Leave a comment or join the conversation on Facebook.

Lies, Damn Lies, and Data Lies: A Homeless Epidemic Among College Students?

Imagine lying on a friend’s couch in her studio apartment, using the light from your cell phone to study for your midterm exam in small-business entrepreneurship. It’s late but you’re just now getting around to hitting the books because you’ve been out all day preparing for a contest, the winner of which is going to take home a $1,000 scholarship, which you can use to enroll in more community college classes next semester.

You’re hungry because you couldn’t afford to eat today; shampoo, bus fare, and books were more important purchases. The welfare benefits you receive each month just can’t cover all the costs. You’ve been living on your friend’s couch off and on because you keep fighting with your parents over chores and how late you can stay out now that you’ve turned 18. Sometimes you stay at a friend’s; other times you find an abandoned building or a van to crash in.

Now imagine this happens to 46,000 students in one community college school district alone. Imagine nearly 150,000 community college-age kids in one district going hungry on a regular basis because they don’t have enough money for food.

Hard to believe, right? It is hard to believe. But that’s what a new study in the Los Angeles Community College School District found.

The advocacy group Wisconsin HOPE Lab, based in Washington, D.C., does research “aimed at improving equitable outcomes in postsecondary education.” The study it produced — and reported by The Los Angeles Times — was commissioned by the school district’s board of trustees after the county Board of Supervisors decided it would spend a newly approved sales tax on homeless services, and in particular, on homeless college students. The tax is projected to be worth $3.55 billion over 10 years.

As LaCorte News reports, the sampling of students in the Los Angeles Community College School District wasn’t in fact scientific. The lab emailed thousands of students, asking them to fill out a survey online. Less than 5 percent responded. The authors acknowledged in their report that “the findings in this study are limited by low response rates and potentially non-random sampling. Students experiencing food and housing insecurity may have been more likely to respond to the survey.”

As the email-only news agency notes, “Polls created by organizations with a mission, unsurprisingly, nearly always end up with poll results supporting that mission. This was one.”
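The quoted caveat about non-random sampling can be made concrete with a little arithmetic. The sketch below uses invented numbers (none are from the HOPE Lab report): if students experiencing housing insecurity are simply more likely than everyone else to answer an emailed survey, a low-response-rate poll will overstate the true prevalence considerably.

```python
# Hypothetical numbers, not from the HOPE Lab study: suppose 10% of a
# district's 100,000 students are actually housing-insecure, and that
# affected students answer an emailed survey far more often (15%) than
# unaffected ones (4%).
N = 100_000
true_rate = 0.10
resp_affected, resp_unaffected = 0.15, 0.04

# Expected number of responders from each group
affected = N * true_rate * resp_affected            # 1,500 responders
unaffected = N * (1 - true_rate) * resp_unaffected  # 3,600 responders

response_rate = (affected + unaffected) / N         # overall response rate
measured_rate = affected / (affected + unaffected)  # what the poll reports

print(f"overall response rate: {response_rate:.1%}")
print(f"true prevalence {true_rate:.0%} -> survey estimate {measured_rate:.0%}")
```

With these assumed response rates, a 10 percent problem shows up as roughly a 29 percent problem in the survey, on an overall response rate of about 5 percent — which is exactly why the authors’ own caveat matters.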

You can’t really blame the school district for wanting to get in on the cash windfall; it’s animal instinct for large organizations to do so. But data lies are easy to generate, and they have become a major challenge for policymakers trying to figure out how much money to raise and where to spend it.

The problem is so large that Congress has created a commission to study it. And Robert Doar, who served as both commissioner of social services for the state of New York and commissioner of the New York City Human Resources Administration, says the issue isn’t so much that the information doesn’t exist, but rather that it’s not being shared “in any organized, comprehensive or effective way.”

This failure is especially evident between the federal government and state and local agencies. The federal government relies on Census Bureau data to determine levels of welfare and food-stamp benefits each year, but the Census data are becoming “increasingly inaccurate” and not reflective of “the true condition of America’s low-income populations.”

Throughout my 20 years of working in the social services agencies of New York State and New York City, I was constantly aware that I had a clearer picture of what was going on in low-income households than what was being reported in the Census Bureau’s annual reports on the economic condition of Americans, including those living in my state. Using data systems common in every state, I could see who was receiving food stamps or other welfare benefits, in what neighborhoods, and in what types of families. I knew the education levels of recipients, their family structures, and employment statuses. In short, I knew how much assistance New Yorkers were receiving from various government programs.

Especially troubling was the fact that the Census Bureau was indicating greater economic distress than what my colleagues and I knew was really the case.

Doar says a new report shows why the quality of data is on the decline.

Households have simply become less inclined to respond to surveys; and when they do, they are less likely to answer certain questions and provide accurate information, particularly when they are being asked about receiving various forms of public assistance. …

(The study’s) coauthors revealed significant underreporting of receipt rates and the value of benefit received for most poverty-reducing programs. For example, surveys failed to capture almost half of the dollars given out by the Temporary Assistance for Needy Families program.

He notes that underreporting makes the problem look much more severe than it is.
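Doar’s mechanism can be illustrated with a toy calculation. Everything below is hypothetical — the poverty threshold, the households, and the 50 percent reporting rate are invented for illustration (the rate only echoes the TANF example above): when survey respondents report only part of the benefits they receive, measured income falls, and more households get counted as poor than actually are.

```python
# Hypothetical sketch of Doar's point. All figures are invented.
POVERTY_LINE = 20_000  # assumed annual income threshold for this example

households = [
    # (earned income, actual benefits received)
    (12_000, 10_000),
    (15_000, 8_000),
    (18_000, 1_000),
    (25_000, 0),
]

REPORTING_RATE = 0.5  # suppose surveys capture only half of benefit dollars

def poor(income):
    return income < POVERTY_LINE

# Poverty counted from true total resources vs. from survey-reported income
actual_poor = sum(poor(earned + benefits) for earned, benefits in households)
survey_poor = sum(
    poor(earned + benefits * REPORTING_RATE) for earned, benefits in households
)

print(f"actually below the line: {actual_poor} of {len(households)}")
print(f"below the line per survey: {survey_poor} of {len(households)}")
```

In this sketch, only one household is genuinely below the threshold, but the survey counts three — economic distress looks far worse on paper than on the ground.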

If data become more reliable the closer to the ground you get, then sharing the data would seem a helpful solution. But of course, in a world of regulatory paralysis, there needs to be some kind of “legal authority or established incentives for state agencies to share their datasets with the Census Bureau.”

If agencies did share their data, Doar predicts that “it would dramatically improve the ability of the Census Bureau to describe the real economic condition of Americans.”

Such a measure would allow for a much-needed correction in how we understand poverty and perceive government programs in the country, ultimately contributing to more targeted and more effective policy decisions. …

Such data sharing could have far reaching impact on federal policy by allowing us to know the actual value of the benefits we provide, and how effective these programs are in moving families toward self-sufficiency. There is no more important fact to know in the War on Poverty.

Read the full Doar article in Real Clear Policy.

Agree to Disagree in a Constructive Way

It seems like it’s becoming increasingly difficult in the current political climate to “agree to disagree.” But can we disagree in a way that’s not destructive? Can we at least try not to be downright contemptuous toward those with opposing views?

That’s the question being discussed by economist Arthur Brooks, who says politicians, in particular, are creating the climate of contempt. And the damage is being foisted upon the average American.

“We have leaders who are encouraging us as citizens to treat each other with contempt,” Brooks, president of the American Enterprise Institute, said during a recent Facebook Live discussion from the Aspen Ideas Festival, an annual event held by the Aspen Institute in Colorado. “That’s a really dangerous business, building power on the basis of contempt and division. …

“The most destructive way to disagree is to treat your interlocutor with contempt. We have to get out of that particular habit. We have to demand leaders aren’t going to do that,” he said.

Sociologists describe contempt as a phenomenon in which individuals hold the conviction that other people are utterly worthless. It’s more insidious than disagreement or even anger, Brooks says.

“Anger you get over … contempt you don’t. If I treat you as a worthless human you’re never going to forget that,” he said, citing the work of marriage counselor John Gottman, who can watch a couple on a video for five seconds without the sound on and predict with 94 percent accuracy whether they will stay together or divorce based on physical expressions of contempt.

Nationally, 86 percent of Americans say they believe the country is more politically divided than in the past, according to the Pew Research Center, the highest share since the question was first asked in 2004. At the same time, a CBS poll found that a majority are optimistic that Americans of different political views can come together and work out their differences.

Brooks said that Americans in general have long been able to hold political disagreements and still treat each other respectfully.

“We all love somebody who doesn’t agree with us politically,” he said.

The obsession with national politics is not only at odds with what the Founding Fathers envisioned; it is also to blame for the cult-like partitioning of Americans into political tribes. Fortunately, many political leaders at the state and local level on both sides of the aisle are solving problems without the distraction of creating heroes and villains.

Brooks says it comes down to being able to “disagree better.”

“The positive change starts with us.”

Do you think that Brooks is correct, and can anything be done to improve the divide?

Watch the video to hear more of Brooks’ views on the political climate and free enterprise as well as how he went from a classical musician to a world-renowned economist and researcher on happiness.

Underappreciated: Veterans’ Contributions to America After Military Service

Do you know a veteran? If you don’t, you are not alone. With the Greatest Generation sixteen million strong, just about all Americans knew a veteran following World War II. They were perceived as the most honorable among us, and as a result they were revered and studied for their character traits.

That has changed, according to Gary Schmitt and Rebecca Burgess, director and program manager, respectively, of the Program on American Citizenship. The Greatest Generation is dying and the new generation of service members is a much smaller group than it used to be.

As a result, Americans don’t know a veteran anymore, not like back in the day. This unfamiliarity has led to a decline in appreciation of veterans’ contributions, and the repercussions are not good.

We now tend to view (veterans) in a bipolar way, either as heroes or victims. Around half of Americans who see a homeless man believe he’s a veteran, one study found — they’re wrong 90% of the time — yet they also rush to thank veterans for their service.

Americans, in other words, don’t understand veterans. This is partly due to the professionalization of the military. In 1973 the federal government ended conscription and established the all-volunteer force. As the population grew and the military drastically shrank, the military-civilian divide grew wider and became self-reinforcing. Today, the child of a career-military parent is six times as likely to make the military his career, while less than 1% of Americans serve. Veterans are often assumed not to be representative of America at large.

The distorted view of veterans is unfortunate, particularly because veterans likely contribute disproportionately to our civic culture today. Limited data suggest that veterans are more inclined to participate in public service and civic life — even after they leave military service — than the general population.

Once again, they are carrying the weight of our liberty on their shoulders.

Shortly after World War II, University of Chicago sociologist Samuel Stouffer launched an entire field of study dedicated to the effect of military service on attitudes and behavior in civilian life. Repeating those studies, which documented the activities of returning veterans after World War II, in the modern era would still be very helpful, not because of their impact on the health care system or the discovery of appropriate treatments for PTSD, but because veterans demonstrate qualities many of us don’t embody.

With a 21st century steeped in war, it couldn’t hurt to know more about the latest generation of veterans.

It’s likely that veterans’ participation in civic life, and especially in politics and elected office, will improve the country similarly to how the World War II generation’s involvement did. There are signs that it already is. But this is something we should know, rather than speculate about, the next time we see a homeless individual or thank vets for their service.

Americans don’t grasp just how much veterans do for America, both inside and outside the service, but an instinctive understanding of veterans’ contributions explains why public opinion holds them in higher regard than other entities that enjoy public trust (read: Congress and the media, to name a couple).

So even as veterans humbly engage in public service — after already stepping up to participate in the all-volunteer armed forces — we as Americans can try to learn from their example.

Read more about veterans’ service.

The Role of Parents in K-12 Education

Two classmates grow up together from kindergarten. They sit next to each other in homeroom, have all the same classes with all the same teachers, and take the same state-required tests. One does well and one not so much. What accounts for the difference?

The answer depends on who is responsible for a child’s education. In the 1980s and 1990s, so much of that responsibility was shifted onto parents that it hurt student outcomes. Now the reverse appears to be true.

Educator and researcher Rick Hess describes what schooling was like back in the day.

Back in the 1980s and 1990s, American education paid a lot of attention to the quality of parenting and far too little to the quality of teaching and schooling. It wasn’t unusual to hear educators declare that certain students were unteachable or that they couldn’t be blamed for not teaching kids who weren’t there to learn.

In the early 1990s, I was supervising student teachers for Harvard University’s Graduate School of Education and I’ll always recall one exchange that crystallized the old ethos for me. I was visiting an iconic Boston high school that had seen better days. The bell rang and the social studies class I was observing got started. In a room of 30 or 35 kids, there were maybe a dozen who were taking notes, participating, and paying attention. The rest were passing notes, staring out the window and generally tuning out. My student teacher tried all manner of teaching strategies, but none made much difference.

The class finally ended and the students shuffled out. The student teacher, his mentor teacher, and I sat down to talk. I asked the mentor, ‘So, how’d you think the class went?’

He said, ‘What really impressed me was how engaged the students were.’

I wondered if he was kidding. He didn’t seem to be. I said, ‘Here’s the thing. To me, it looked like maybe 10 students were really involved. Did I miss something?’

What he said next has always stuck with me: ‘No, that’s about right. But he had all of the students who were here to learn. The others, the knuckleheads, well, you just want to keep them in line.’

Times have changed much for the better since the ’90s, but don’t take Hess’ recollection to mean that he believes educators alone are responsible for today’s student performance outcomes.

In fact, the push to ensure teachers are responsible for educating ALL students has swung the pendulum to the opposite problem. Parents are now on the back burner, and some are even conditioned to prefer it that way. In some districts “parental responsibility” dare not be uttered for fear that parents will slam teachers for trying to make excuses for poor educational outcomes.

But the role of parents in K-12 education needs to be raised to a par with that of teachers. Parents must “do their part” to ensure their children learn. That means making sure students are prepared when they arrive in the classroom. It means parents must insist their children show respect for their teachers, complete their homework before returning to school, and accept school-mandated discipline without calling on their parents to argue their way out of a fairly meted punishment. And it means parents themselves must be prepared for activities like parent-teacher meetings.

If not, parents are let off the hook while educators bear the brunt of the blame for poorly prepared students. Hess describes the balance that needs to be struck.

Think about how this works in medicine. When we say someone is a good doctor, we mean that they’re competent and responsible; we don’t mean that they perform miracles. If a doctor tells you to reduce your cholesterol and you keep eating steak, we don’t label the physician a ‘bad doctor.’ We expect the doctor to do her job, but we expect patients to do their part, too. This is the handshake between doctor and patient, and saying so isn’t seen as ‘blaming’ the patient.

When the patient is a child, parents come to play a crucial role. If a diabetic child ignores the doctor’s instructions on monitoring blood sugar, we don’t blame the child or say the doctor is failing. We expect parents to learn what’s required and make sure it gets done.

When it comes to the handshake between parents and educators, though, things have broken down. After all, teachers can’t make students do their homework, turn off their devices, or show up at school on time. Parents can.

Hess isn’t letting teachers return to the days of selective attention, and he acknowledges that raising healthy, mindful children is hard work. But education doesn’t stop at the schoolhouse door. Turning over students to the school system and then complaining that they aren’t learning hurts educators who are doing a good job against the odds.

It doesn’t take a village to raise a child, but it does take a parent-teacher partnership to educate one.

Read Hess’ article in U.S. News & World Report.

Using the Burger King Mentality to Destroy a Four-Year Investment

Can you hear the tune playing? “Hold the pickle, hold the lettuce. Special orders don’t upset us. All we ask is that you let us serve it your way. … Have it your way at Burger King.”

It’s an enduring commercial with a memorable tune. Forty-three years after its release, people still recall the jingle as one of the most effective pieces of advertising ever made, driving home exactly what Burger King was selling — convenience, made-to-order fast food, delivered to you just the way you like it, no questions, no lip, no delay.

It’s the Burger King mentality, and it’s great for ordering a drive-thru dinner. But the sentiment has crept into a lot of college campuses lately, and unless you’re in the student union food court, the Burger King mentality has no place at these institutions of higher learning. In fact, it can do real damage to a four-year investment in a college education.

Sadly, however, that’s how many students think of this four-year investment in a college education. Take this example:

Back in my early days of college, I complained often and loudly about any professor who had the temerity to include attendance as part of the course grade. ‘Not only am I capable of making my own decisions about going to class,’ I’d explain haughtily, ‘my tuition and fees pay his salary, so I should really get to choose how I’m graded.’ I eventually learned the inherent flaws of this opinion – thanks in no small part to several well-meaning professors more than happy to use ample amounts of that mandatory class time disabusing me of this and myriad other asinine notions.

Unfortunately, my consumer-based justification for why I ‘deserved’ to be given a bespoke educational experience – I pay your salary – is quite common on college and university campuses. Rather than consider postsecondary education an undertaking of self-improvement or intellectual exploration, many students approach college as more akin to ordering off a fast-food menu: I already know what I want, and since I’m paying, I expect it served to me just as I asked, immediately.

This is how Grant Addison, an education policy studies research assistant barely out of college, described his thinking. His mind has changed since conducting the research showing the downside of such a haughty outlook.

Put generally, evidence suggests that today’s students graduate without sufficient intellectual humility. Intellectual humility governs how a person views (one’s) own mental capabilities: This involves things like one’s understanding of the limitations of their knowledge, receptivity to new ideas and evidence and ability to consider new or conflicting information fairly and dispassionately. Critical-thinking and argumentative-reasoning skills draw from this well, as do many emotional qualities related to positive social interaction. Therefore, along with increased educative abilities, the intellectually humble are also better able to engage in civil discourse and interact with opposing perspectives.

Unfortunately, intellectual humility has gone out the door as universities shift “toward a customer-service paradigm.” The commodifying of higher education — in which university administrators focus on the bottom line rather than on higher learning — has created other problems as well.

The first is that schools are intolerant of intellectual diversity, which means little room for dissent or creative energy. You can’t churn out uniform degrees if everyone has his or her own opinion. And this means a crackdown on the very purpose of university education — intellectual rigor and truth-seeking.

Perhaps nowhere has the abject failure of higher education to teach students to think critically or act maturely and civilly been on greater display than with the issue of free speech and expression. Examples of campus-speech controversies are numerous and varied, yet together they illustrate a kind of intellectual protectionism that has consumed a significant portion of higher education. Feeling entitled as consumers, petulant students increasingly demand safety from and punishment of any views deemed ‘offensive’ or simply unwanted – justifying censorship with such intellectually bankrupt canards as ‘speech is violence,’ or even perpetrating actual violence. Fearing the ire of the campus mob – or worse, that prospective students might not view their school as ‘supportive’ – feckless administrators turn a blind eye to their institutional strictures and basic psychology to join this regressive call-and-response.

If that weren’t bad enough, there’s this: Uniformity is taking its toll not only on students’ ability to think critically and engage in intellectual disputes, but also on their ability to function in the workforce.

Just this week, The Wall Street Journal unearthed more data highlighting the failure of colleges and universities to improve students’ critical-thinking skills. This analysis builds on earlier work by Richard Arum and Josipa Roksa concerning undergraduates’ abysmal results on the Collegiate Learning Assessment Plus, a little-known test that measures students’ critical-thinking, analytical-reasoning and problem-solving abilities. According to test results from dozens of public colleges and universities between 2013 and 2016, the Journal found, “At more than half of schools, at least a third of seniors were unable to make a cohesive argument, assess the quality of evidence in a document or interpret data in a table.” Even at some of the most prestigious flagship universities, “the average graduate shows little or no improvement in critical thinking over four years.”

These findings come on the heels of last Friday’s lackluster May jobs report, which detailed a continued deceleration of the job-growth rate. Analysts believe this signals that businesses are struggling to find qualified candidates to hire – which is consistent with reams of survey data gathered from employers who lament that newly hired college graduates aren’t prepared for the workforce. When asked which traits are lacking, most employers cite either critical-reasoning skills or interpersonal or people skills as their primary complaints.

What can be done? Well, there’s always the money angle, and removing the wrinkle of unfettered tuition loans in the supply-demand formula. Then there’s the option to skip the universities and choose alternative educational methods to hone valuable skills for the workplace. Lastly, administrators could return to doing their jobs.

As social scientist Charles Murray points out after a recent protest-turned-assault at Middlebury College, the president of the school could have used her authority to hold the offending students accountable. She did not. It would have been a good place to start, yet inaction has become the all-too-common response of late.

Murray quotes social psychologist Jonathan Haidt’s description of Aristotle’s concept of telos as an aspiration these administrators may want to revive. “A university must have one and only one highest and inviolable good,” in this case, truth.

Murray continues:

The competing agenda of social justice is incompatible with truth. In their personal lives, students, faculty, and administrators are free to pursue social justice as they define it. But the university cannot take sides. The end of the university, its very reason for being, is to enable the unending, incremental, and disputatious search for truth. A university must be a safe place for intellectual freedom, else it has failed in its purpose.

To wit: If you use a Burger King mentality, don’t expect to get a prime-rib-quality education.

Why Wouldn’t the White House Promote Apprentices?

It seems obvious that the role of the apprentice is something President Trump appreciates, so it’s a wonder the question needs to be asked: Why wouldn’t apprenticeships be a top priority in Washington?

Well, they were this week. In the midst of several news cycles that generated lots of heat but little light, you probably didn’t realize that this week was “workforce development week.” And lest you think this is reality TV, the administration made a big push on apprenticeships during cabinet meetings and talks with state leaders.

Indeed, the president this week called for 5 million new apprenticeships over the next five years, and in a rare case for Washington, he has some bipartisan backing to pursue the goal.

In the era of four-year liberal arts degrees, apprenticeships sound anachronistic, like colonial-era horseshoeing and blacksmithing. In actuality, they’re a great opportunity for less-skilled workers, or workers with outdated skills, to get the training — and confidence — they need while getting paid to do the work.

Apprenticeships typically take the form of an employer and some type of education provider teaming up to offer hands-on training to prospective workers. Most apprenticeships are government certified. Importantly, apprenticeships are paid (unlike the typical internship), making them attractive to older workers who can’t go without an income and younger workers hoping to avoid borrowing for further education.

As labor economist Andy Smarick explains, apprenticeships are indeed paid; the question is by whom.

So what are possible downsides of apprenticeships? One is cost; there are nontrivial expenses associated with educating someone for a job. Obviously, employers will be wary in that investment if those trained end up taking those skills elsewhere. One question relates to education providers; who should deliver training — high schools, community colleges, unions, nonprofits, for-profits, employers? And assuming the government provides funding, how should providers be held accountable?

Of course, in Washington, success always comes down to money. And this case is no exception. The federal budget currently allocates $90 million a year to cover the cost of “regulating” apprenticeship programs. CNBC reported this week that the president doesn’t want to raise that budget by more than $5 million per year, leaving some news editorials to ponder whether the federal kitty has enough cash to keep the program purring.

Really, the challenge for Washington isn’t necessarily whether there’s enough money to grow the program, but whether it’s being used well. Politico reported that one senior administration official said, “The problem is not money … the problem is (training programs) haven’t been set up in an effective and accountable way.”

So what are the expenses being added up in Washington? They include partnerships between employers and higher education, which would mean dealing with accreditation and student aid. Or, the money could be spent on reorganizing existing federal workforce programs, which have been overlapping and wasteful.

Aside from the financial question, Smarick notes that one of the major concerns around apprenticeships is that young students coming out of high school and trained for a specific job eventually fall behind later in life because their skills “become outdated, the industry weakens, or the jobs get replaced.”

On the other hand, the U.S. Department of Labor found that “nine in 10 Americans who complete apprentice training land a job, and their average starting salary is $60,000 a year.” That’s certainly a step forward from the current situation, in which young people with high-school educations alone are roundly unprepared to enter the workforce, and generally end up in less-skilled, lower-wage jobs with less security.

Here’s a thought: No one is suggesting that high school educations be replaced with vo-tech, but if the U.S. were to follow the European model of tracking kids according to their skill aptitudes, then perhaps industries as a whole could provide the ongoing job training that lawmakers so frequently laud but rarely enable. Appropriate structuring of federal budget expenditures would not only provide enough money to fund apprenticeship accreditation, but could put programming on a path toward more accurately targeting workers for updated mid-life skills training.

Seems like that’s a program that would result in more workers hearing, “You’re hired!”

Read more from Smarick on President Trump and the basics of apprenticeships.

The Success Sequence: Why Education, a Job, Marriage, Then Kids Is the Working Order

Ah, millennials. In some ways, they’re very traditional, suggesting that women should stay at home to raise their kids. In other ways, they are very bohemian, doing as they please when the mood hits. But it turns out the old-fashioned “success sequence” — a (high school or higher) degree, a job, marriage, then children, in that order — is still the winning combination for securing financial well-being, even for this generation.

The term “success sequence” isn’t new. It was coined in the last decade by researchers looking for policy ideas that could help break the cycle of poverty. Of course, it was criticized for pointing out that the cycle of poverty is more likely to be perpetuated for kids born into poorly educated, single-parent households with few economic opportunities. It has become rude to point this out even though that’s the problem the research is trying to solve.

But facts are facts, and a new study by W. Bradford Wilcox, a professor of sociology at the University of Virginia, and Wendy Wang, of the Institute for Family Studies, found that the success sequence holds up as a guidepost for today’s Millennials just as it did for Baby Boomers, even after adjusting for a wide range of variables such as childhood family income, education, employment status, race/ethnicity, sex, and respondents’ scores on the Armed Forces Qualifying Test (AFQT), which measures intelligence and knowledge of a range of subjects.

The study found that “diverging paths into adulthood” taken by 28- to 34-year-olds — the eldest of the Millennial age group — produce very different economic outcomes.

Among the findings:

  • Millennials who follow the “success sequence” almost always avoid poverty: 97 percent of those who married before having children were not poor by age 28, compared with 72 percent of those who had children first.
  • 71 percent of Millennials from lower-income families who put marriage before children made it into the middle class or higher by adulthood. By comparison, only 41 percent of Millennials from lower-income families who put children first did so.
  • Among black young adults, those who married before having children are almost twice as likely to be in the middle- or upper-income groups (76 percent) as those who had a baby first (39 percent).
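The “almost twice as likely” phrasing in the last finding is simply a ratio of the two reported percentages, which is easy to verify:

```python
# Checking the ratio implied by the Wilcox-Wang figures above.
married_first = 0.76  # middle class or higher among those who married first
baby_first = 0.39     # same outcome among those who had a baby first

ratio = married_first / baby_first
print(f"relative likelihood: {ratio:.2f}x")  # just under 2x
```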

(Chart: success-sequence statistics)

Since 55 percent of 28- to 34-year-old Millennial parents had their first child before marriage, the economic and family impacts will be felt for decades.

Millennials are more likely than previous generations to delay marriage and parenthood, but that doesn’t mean they have to forgo the order of education, work, and marriage. Indeed, there’s a reason the success sequence works.

Why might these three factors be so important for young adults today? Education confers knowledge, skills, access to social networks, and credentials that give today’s young adults a leg up in the labor force. Sustained full-time employment provides not only a basic floor for household income but, in many cases, opportunities for promotions that further boost income. Stable marriage seems to foster economies of scale, income pooling, and greater work effort from men, and to protect adults from the costs of multiple partner fertility and family instability.

Moreover, the sequencing of these factors is important insofar as young men and women are more likely to earn a decent income if they have at least acquired a high school education, and young marrieds are more likely to stay together if they have a modicum of education and a steady income. So, it’s not just that education, work, and marriage independently seem to matter, but the sequencing of education, work, and marriage may also increase the odds of financial success for today’s young adults.

Wilcox and Wang point out that there’s no statistical model to perfectly predict a youth’s future success. Some who succeeded came from roots missing those steps. Others who lived in households that followed the sequence ended up in the bottom third of the income scale. Lastly, there’s no conclusive evidence that the “sequence plays a causal or primary role in driving young adult success.”

The researchers also note that it’s easier to follow the success sequence when one is born into it than it is for young adults who come from poor neighborhoods, bad schools, and less educated households. It’s also easier when one comes from a cultural background that embraces these ideals and expectations than from a group that holds these values in lower regard.

But there’s no mistaking that the numbers overwhelmingly favor those who do follow the course, and that’s where both one’s personal “agency” and public policy come into play.

This report suggests that young adults from a range of backgrounds who followed the success sequence are markedly more likely to steer clear of poverty and realize the American Dream than young adults who did not follow the same steps.

Given the value of the success sequence, and the structural and cultural obstacles to realizing it faced by some young adults, policymakers, educators, civic leaders, and business leaders should take steps to make each component of the sequence more accessible. Any initiatives should be particularly targeted at younger adults from less advantaged backgrounds, who tend to have access to fewer of the structural and cultural resources that make the sequence readily attainable and appealing. The following three ideas are worth considering in any effort to strengthen the role that the success sequence plays in the lives of American young adults.

Read the full report here.

Is Vaping Safe? Yes. Then Why Try to Force It Out of Existence?

Smoking is bad for you, but is vaping safe?

What is vaping, you ask? Vaping is a substitute for cigarettes. Users inhale vapor from an e-cigarette, which heats a liquid made from vegetable glycerin or propylene glycol (PG), a synthetic compound used in massage oils, injectable Diazepam, hand sanitizers, and a host of other products.

The Food and Drug Administration (FDA) has designated pharmaceutical-grade PG, the kind used in vaping, as “generally recognized as safe.”

The vaporized liquid is thicker than smoke, though it isn’t smoke. There’s no tobacco in the liquid, nor any of the tar, carbon monoxide, or other dangerous toxins found in cigarettes. In fact, in many cases the liquid doesn’t even contain nicotine, the addictive ingredient in cigarettes, though nicotine is often the draw for smokers using e-cigarettes to quit. Vaping usually smells good because the liquid is infused with fruit, mint, or other flavorings.

Vaping has risks, but it’s way safer than cigarettes — like 95 percent safer!

So why has the FDA been trying to treat e-cigs like cigarettes? Usually, you have to follow the money. In this case, there’s the added bonus of following the moralists who equate vaping to tobacco and think that smoking is evil, no matter the product. It doesn’t hurt the moralist argument that the cigarette companies are now getting in on vaping as a recovery point for the dying tobacco industry.

Drug and addiction specialist Sally Satel explained what the FDA is doing.

In the spring of 2016, the FDA issued a “deeming rule” bringing e-cigarette devices and associated nicotine liquids under the jurisdiction of the Tobacco Control Act and requiring each product to be authorized by FDA.

It was clear from the outset that the cost of filing an application for approval would be excessive. FDA itself estimates application costs of between $286,000 and $2.6 million for devices and between $182,000 and $2.0 million for liquids – and there are tens of thousands of devices and liquids.

The FDA could flip its position and keep the industry alive while it proves itself. Its new commissioner, Scott Gottlieb, a former colleague of Satel’s, could delay the regulatory rules whose “unrealistic and unnecessary demands” will put 90 percent of the industry out of business.

Since vaping is safe but the regulatory regime is harsh, the threat of vapes being pulled from the shelves is real. That would undermine the vaping industry’s huge success in getting people off cigarettes. Product standards are one means of regulation, but driving people back to a deadly product seems counterproductive to the FDA’s stated goals.

Learn more about vaping from Satel.

Deaths of Despair: Opioid Abuse Devastates America. There Is a Solution

Low-income, poorly educated whites between the ages of 45 and 54 are dying too soon. Unlike every other age, racial and ethnic, education, and income group, this group’s longevity is decreasing. Why? Opioid abuse.

That’s right: prescription painkillers, heroin, fentanyl, and other opiate derivatives killed more than 33,000 Americans in 2015, about four times the number of opioid-related overdose deaths recorded in 1999.

Nearly half of those overdoses involve prescription painkillers. But the number of opioid prescriptions written has been declining since 2011, which may explain the rise of heroin and fentanyl as substitutes for legal opioids. Heroin and fentanyl now account for more deaths each year than painkillers do.

These drugs are extremely potent. Fentanyl, which was created to relieve pain in end-of-life cases, is about 50 times more potent than heroin, but people can survive it because they build up a tolerance.

The costs associated with this national epidemic total about $77 billion.

That seems shockingly high, but consider some of the tentacles of the epidemic. The foster care system is overwhelmed. West Virginia, which has the highest overdose death rate in America, has run out of funding for funeral burial benefits. Ohio has started building portable morgues because coroners’ offices are full. The state of Arizona recently concluded that, on average, more than two Arizonans died every day in 2016 from opioid-related overdoses.

To put it bluntly, the United States has a killer problem on its hands.

Christopher Caldwell, a senior editor at The Weekly Standard and author of an essay entitled, “American Carnage: The New Landscape of Opioid Addiction,” recently spoke at a conference about the massive growth in opioid-related deaths. It’s a problem that began long ago.

The specific problem of opioids, I think, has to do with the confluence of three things in the 19th century: In the start of the 19th century, scientists were able to isolate morphine, the chemical in opium poppies. In the 1850s, we invented the hypodermic needle, and in the 1860s, we fought the bloodiest war in the history of the planet, and a lot of people came home with what we now call chronic pain, and the ability of, the uses of this drug were just infinite.

It was over-prescribed. You know what happened, or you can predict what happened. A lot of mothers and teachers, and like, pillars of the community, got addicted and died.

It wasn’t until shortly before World War I that the first drug laws were passed. Drugs became taboo, but after Vietnam, drug use started rising again, and with it, so did drug deaths. The use of crack in the 1980s elevated the death rate further. But the spike in recent years is a whole different animal.

So can something be done? Well, resources seem to be moving in the right direction, and in one of those rare good news stories, federal money is being directed toward actual solutions.

For instance, drug courts have expanded access to medication-assisted treatment (like methadone), and residential treatment programs, as an alternative to jail, are helping addicts recover rather than languish in prison.

Harold Pollack, a professor in the School of Social Service Administration at the University of Chicago and a contributing researcher to the National Drug Abuse Treatment Systems Survey, which tracks drug addiction and substance abuse treatment programs nationwide, says there is also some movement among lawmakers “who are looking at a map of the nation, and seeing the problem is everywhere.”

“Antiquated behavioral health systems” are being given new life with federal funding. Ironically, these solutions are being funded in part by one of the most controversial assistance programs out there: Medicaid.

“Medicaid is kind of the ball game on the service side. It’s so much more important than the (21st Century) Cures Act or anything else that people are going to talk about,” Pollack said.

Pollack said that as lawmakers figure out how to replace the Affordable Care Act, one of the issues that isn’t on the chopping block is mental health parity in health care, which includes addressing the symptoms that lead to drug addiction.

What’s striking is ACA-Medicaid expansion is kind of the quiet model for successful bipartisan health policy. Nobody really wants to talk about it, but that’s what is happening on the ground. When you call up someone in a random state … the conversation is about the work, it’s not about the politics.

And in fact, when we ask people, you know, there’s just been an election, does that change anything, the most common answer we hear is, ‘We’ve been told from our governor: just do the work, don’t pay attention to what’s happening in Washington, just keep doing.’ And I actually find that very encouraging. Democrats and Republicans around the country are governing and they’re really trying hard to deal with this because they see this map, and they don’t want people to die.

Pollack notes that Medicaid expansion has been good and bad, and that when it comes to addressing the drug crisis, policymakers “know less than we should about what’s happening out there.” Fortunately, he said, the problem is finally being taken seriously, though it’s unfortunate that such conditions had to arise before it was.

The crack epidemic, the HIV work, when the drug problem was much more black and brown in its public conception than it is now, that’s a welcome change. I must say I feel a certain sense of sadness at seeing the difference in public reaction, but it’s a good thing that people are responding with empathy and compassion.

Watch more about the opioid epidemic.

Farm Subsidies: Not Your Father’s Cropshares

Imagine this scenario: A massive disaster hits and America’s food sources are wiped out. Miles of crops no longer exist. The cost of food skyrockets. America’s farmers are devastated. The land is destroyed, the farm equipment turned to trash, homes and livelihoods are ruined.

The government mobilizes into action. How? By paying the farmers for their losses.

Could it happen? Well, weather disasters do occur, and drought and crop loss do impact farms. And lo and behold, the government does pay farmers for their losses. But is this assistance really necessary?

By one estimate, it would cost taxpayers about $6 billion a year to cover the losses from these disasters. What’s more, this level of loss doesn’t have any real impact on America’s food supply. A bigger impact on farm production and prices comes from government manipulation of the market.

So why are farmers receiving $23 billion in federal farm subsidies — government payouts — every year, more than a third of which is going to pay for crop insurance?

Farmers are vital to America and the world. America alone provides about 30 percent of the global corn supply each year and 8 percent of its soybeans.

But the image of the struggling American family farm is now more myth than fact. Data from 2014 show that the median wealth for farm operator households is $802,000, roughly 10 times the median for U.S. households overall ($81,200).

Meanwhile, just 2 percent of farm households live below the poverty line. Compare that to the national average, where 43 million Americans, around 13.5 percent, live in poverty (if you don’t count government assistance!).

That’s not to say there aren’t a whole lot of family businesses operating farms. That’s merely to say that the vast majority of the nation’s crops comes from large farm productions.

So why is the taxpayer subsidizing farmers? For some perspective: Farmers in the top 1 percent income bracket on average collect $1.5 million in annual farm subsidy welfare checks. Seventy-nine percent of all farm subsidies are paid to the top 10 percent of the largest farm operations.

American taxpayers give farmers $8.5 billion a year to pay for insurance in case of crop losses. If you think Obamacare is a wealth redistribution program, consider this: the taxpayer effectively pays $2 to transfer 90 cents to the farmer and $1.10 to the insurance industry.

From Vincent Smith, an economics professor in the Department of Agricultural Economics and Economics at Montana State University:

The largest farm subsidy boondoggle through which the farm sector milks the federal taxpayer is the federal crop insurance program. Currently, under this program, taxpayers fund over 60 percent of all indemnities received by farmers. For every dollar the average farmer pays out in premiums, he or she gets back more than $2 in indemnity payments without making any contribution to the program’s administrative costs.

For farmers, crop insurance is an upside down Las Vegas gamble where the odds of winning are massively stacked in favor of the gambler, not the casino. The “casinos” in this case are the agricultural insurance companies, and they are not really losing any money because almost all crop insurance program losses are underwritten by taxpayers. …

There are no caps on subsidies in the federal crop insurance program; the bigger and richer the farm, the more lucrative the crop insurance program. Because the risks of crop revenue losses from poor crops or low crop prices are covered, farmers adopt more risky production and financial strategies. They win if the risky decisions pay off; the taxpayer foots the bill if they don’t. Farmers also have incentives to plant crops on lands that have poor soils, are environmentally fragile and that would never otherwise be used for crop production.

Smith notes that the current administration has proposed modest reforms to the 2018 farm bill, which comes up for renewal every five years. The proposal calls for a 20 percent cut in the $23 billion spent every year on farm subsidies, of which about $2.8 billion would come from reducing insurance subsidies.

The Trump administration wants a cap of $40,000 per individual farm for government subsidies used to buy crop insurance premiums. These cuts would only affect farms with market sales for crops in excess of $750,000.

Currently the government pays an average of 62% of all premiums for crop insurance coverage, with no limits on how big an individual farm subsidy can be. In 2011, according to the nonpartisan Government Accountability Office, more than 20 farms received over $1 million in such subsidies, and most crop insurance subsidies flowed to very large corporate farms. The White House’s Office of Management and Budget estimates that the $40,000 cap would reduce annual government spending by $1.7 billion, or about 15% of current crop-insurance subsidies.

According to Smith, the reduction in subsidies to farming operations would be about 1 percent of farm revenue. At $400 billion in annual revenues on top of $23 billion in government largesse, that sum barely impacts the market.

Agriculture is only 1 percent of the overall U.S. economy, and it does involve risk, but does that explanation provide a justification to access the taxpayers’ pocketbook? And if there’s one place to start cutting, shouldn’t it be on payouts to a wealthy class of business owners?

At Risk of Losing Your Lease? A Legal Battle Isn’t the Answer

If you don’t pay your rent, can you still stay in your rental property? Or is that landlord going to kick you to the curb? It’s a fear that low-income families face in difficult times. Rent courts are tough. The legal battle often hinges on a sympathetic judge and a very narrow window to find the money to pay rent before the sheriff’s department comes to get your stuff.

Eviction is a major cause of stress for everyone involved. For renters, especially parents with kids, the thought of losing the roof over your head is enough to keep you up at night. For independent landlords, it’s a scramble to cover the mortgage when the income stream has dried up. For everyone, there is the experience of material hardship and worsening health.

So what if the city came in and decided to help renters out — by paying the legal fee associated with getting a lawyer to help the renter in court? It would probably keep more people in their homes, but is it the best solution?

Only 10 percent of tenants facing a rent court dispute get a lawyer; landlords are represented at far higher rates. Having a lawyer would probably help renters, but is it the city’s job to pick a side in a contractual dispute?

In fact, cities (and the parties in the dispute) may benefit more from helping people to stay where they are, but the better way to ensure that people have homes may not be to feed the legal system. Rather, it’s to use that legal fund to offer emergency assistance that keeps renters afloat during difficult times.

A proposed program backed by members of Washington, D.C.’s City Council would have the city pay for legal representation for tenants who are facing eviction. It’s not a federal issue, but a local one, and it matters because D.C. is extremely expensive, and it’s hard for people who live on the edge to get quality, safe housing.

But think about it. Paying for a lawyer isn’t quite the investment in housing that proponents wish it to be. Homelessness researcher Kevin Corinth says the idea is not only likely to backfire, but could create consequences worse than the harm of eviction.

Yes, legal assistance would reduce the risk of tenants losing eviction battles in court. It would probably even reduce the number of people threatened with eviction in the first place if landlords think they will have a legal battle on their hands.

But here’s the problem. Making it more costly and difficult to evict tenants who do not pay their rent makes it more expensive for landlords to rent out apartments. That could end up increasing the cost of housing in a city where escalating rents are already straining the budgets of low income families.

But an even more insidious consequence is possible. Landlords could decide that it’s no longer worth renting out apartments to people they believe are at risk of missing rent payments. Spotty employment histories, criminal records, and past evictions could be red flags that disqualify people from housing altogether.

In other words, it would be harder for anyone with a blot on his or her record to find a home, effectively driving lower-income city dwellers out of the marketplace altogether.

Emergency assistance isn’t an add-on to housing vouchers, and it’s not a permanent handout. The best part is that it helps the people who need a hand up, preventing them from ending up on a family member’s couch or in a homeless shelter, without raising the cost of living for every other renter. It would keep people in their units without distorting the rental market.

Emergency assistance programs have been tried, and have succeeded, in Chicago and New York, and D.C. would benefit from lining up a pilot program of its own.

Unscrupulous landlords need to be watched and stopped. But, as Corinth states, “an entitlement to legal assistance in eviction cases threatens the basic ideal that the city should provide opportunity to everyone.”

Facebook and Democracy: Social Media’s Coarsening Impact on the Public Square

Could Twitter diminish your tolerance for opposing ideas (as well as your productivity)? Is Facebook bad for democracy?

Facebook, Twitter, YouTube, Reddit, and other social media platforms are set up to show people content that they are already likely to agree with, which is fine when you are checking out puppy dogs and meal ideas. But when the content turns toward politics or life-changing policies, social media algorithms on Facebook and elsewhere leave people seeing only content they “like,” trapping them in a self-reinforcing bubble with little exposure to alternative ideas.

The result? People with different opinions are drifting further and further apart, removed from intellectual challenges and less likely to engage with political opponents. This drop in the need for intellectual rigor is making it more difficult to find solutions to problems that impact everyone.

Harvard Law Professor Cass Sunstein’s latest book, “#Republic: Divided Democracy in the Age of Social Media,” outlines the role of social networks in representative government, and warns that the division of viewpoints into hardened us vs. them groupings is real, growing, and becoming more difficult to overcome with time.

Speaking to political columnist Michael Barone recently, Sunstein said that the blinders narrowing our minds are harming the American creed.

Echo chambers and information cocoons are a real problem for democracy. It’s very important for people to step outside a kind of hall of mirrors, which they can construct with the aid of Facebook or Twitter or Instagram, and encounter both topics that are unfamiliar and maybe not especially interesting to them, and certainly points of view that aren’t congenial and that may be disruptive to what they already think. That is central to, let’s say, the American project.

The average Facebook user gets about 20 percent of his or her news from Facebook, with younger people getting a higher percentage. Likewise, the data show that people on Twitter tend to follow people who agree with their points of view.

Sunstein says this phenomenon is no surprise. Visionaries like Bill Gates saw 20 years ago a new world in which people could get exactly what they want, effectively creating what Sunstein calls “The Daily Me,” a completely personalized online experience in which everything on one’s computer or tablet reflects the owner’s preferences. That’s exactly where society has headed.

Is there a danger in not turning the trend around, or not having people demonstrate a curiosity for what others outside their viewpoints think? And is the decision to look at like-minded ideas on the Internet any different than self-selecting pre-sorts of media that came before it, like the cable news channels or news magazines?

Yes and no, Sunstein says. Self-selection has been going on for ages, but its scale has never been so large or so reinforced. As a result, despite its massive reach, social media have basically made it harder to solve problems. When it comes to policies like immigration, infrastructure, education, or economic mobility, positions have become so rigid that “doing something about some of these issues would seem preposterous.”

Sunstein notes that human curiosity keeps some people from being boxed in entirely. One counter-effect of social media is that people on each side of a debate pay close attention to what the opposition is saying so that they can monitor and challenge it.

Though Sunstein describes his own book as downbeat and not cheerful, he suggested a few prescriptions that could turn the tide for American society. For one, providers of information, whether they be news outlets or Facebook itself, can get out of the business of reinforcing the barriers.

Two ideas that would be on the list of proposals are, why not give Facebook users an Opposing Viewpoints button where they can just click and then their newsfeed is gonna show them stuff that they don’t agree with. Or why not give Facebook users a Serendipity button where they can just click and if they click, then they’re gonna get stuff that is just coming to them through an algorithm which provides people with a range of stuff. So if you’re someone who is just focused on one set of issues, you’re gonna get the “Wall Street Journal” and “New York Times” also.

And Facebook, to its credit, doesn’t wanna pick winners and losers, so they shouldn’t promote one particular newspaper, but they could have a random draw of things, maybe it could be geographical.

One other approach to get us back into constructive debate is to challenge Americans to take the high road when they disagree in public online forums: not merely insult their opponents, but explain the positive aspects of the positions they support. Good luck with that, but courtesy used to be an American value.

Watch Barone’s interview of Sunstein below.

How Work Requirement in Food Stamp Program Helped Reduce Poverty in Maine

TPOH has long advocated maintaining a safety net for those truly in need, but also supporting work as a means to build value in one’s life and in the lives of others. Work provides meaning and purpose, despite those who wish to argue otherwise.

So it’s refreshing to read a strong rebuttal to a shocking claim that suggests proposed changes to the food stamp program will force people to hunt squirrels for food. Turns out such hyperbole doesn’t stand up to the evidence.

The Washington Post reported in a story last week that a Navy veteran was forced to catch, skin, and eat squirrels cooked over a flame near the tent where he lived in Augusta, Maine, after the state tightened its work requirements for recipients of the social safety net. The newspaper then suggested that President Trump’s federal budget proposal mimics the Maine plan and could jeopardize poor people.

But political commentator Marc Thiessen, a former speechwriter for President George W. Bush, cleared up the Post’s misconceptions.

First of all, under federal law, work requirements only apply to able-bodied adults without dependents (ABAWDs). So if a person is truly disabled, he or she would not be subject to work requirements.

Second, the work requirements are not all that stringent. Able-bodied adults can receive three months of food stamp (Supplemental Nutrition Assistance Program, or SNAP) benefits in a 36-month period, after which they have three options for fulfilling their work requirement:

1. Work a paying job for at least 20 hours per week.

2. Participate in a federal or state vocational training program for at least 20 hours per week.

3. Perform 6 hours of community service per week.

This means that in order to be forced to hunt squirrels for food, you’d have to refuse not only to work, but also to participate in work training, or to volunteer for the equivalent of just one hour per day. If you are able enough to hunt and skin squirrels, you’re probably able enough to meet those minimal requirements.

Thiessen then explained that the state helps those who are bound to the work requirement with resumé building, job interview training, support coaching, and even providing volunteer opportunities.

As a result, Maine’s food stamp rolls plummeted by 86 percent, while its able-bodied adult recipients experienced an average 114 percent increase in income!

Forbes magazine reported that people who relied on the program saw their average benefits drop 13 percent because they ended up needing less assistance. The work requirement ended up reducing the cost of the food stamp program by $30 million to $40 million annually.

As Thiessen explains:

In other words, work requirements in Maine have been a huge success. Far from hunting varmints, most people have found work. And – here’s the important part – work is what most people on food stamps really want. …

Thiessen explained that a similar change occurred in New York City under Mayor Michael Bloomberg, where people in the program reportedly said that while the EBT card is nice, they preferred a job. Implementing the work requirement took New York City from having one of the nation’s highest poverty rates to one of the lowest.

To claim that work requirements are somehow cruel is to deny individuals the opportunity to achieve something self-made, an outcome that satisfies an internal need for fulfillment, not just a need for a full belly.

Some oppose work requirements because they see them as a way to punish welfare recipients or deny them benefits. But work is not a punishment. Work is a blessing. And work requirements are a critical tool to help rescue our fellow Americans from the misery of idleness – so they can achieve meaning and happiness in their lives through the power of honest, productive work.

Reagan’s Legacy? ‘Privatization’ Is a Dirty Word

In the era of a billionaire president (namely Donald Trump), any discussion of privatization turns nasty, and it’s Ronald Reagan’s legacy that is getting beaten up in the process.

Reagan was big on running the federal government more like a business and proposed broad ideas for having the private sector take over some of the jobs government was doing. These public-private partnerships helped pump up the economy, and it seemed to make more sense for these jobs to be done by companies whose business it was to do that kind of work. In a 1986 message to Congress, Reagan wrote:

In most cases, it would be better for the government to get out of the business and stop competing with the private sector, and in this budget I propose that we begin that process. Examples of such ‘privatization’ initiatives in this budget include sale of the power marketing administrations and the naval petroleum reserves; and implementation of housing and education voucher programs.

During the Reagan era, privatization began on a broad level, and private-public partnerships were instituted in a variety of areas. Today, these arrangements vary from prison administration to school vouchers. As Gerard Robinson, the former commissioner of education for Florida and secretary of education for Virginia, explains:

Public-private partnerships remain an important aspect of doing business in America; private prisons are still part of our state and federal corrections landscape; 26 school voucher programs are operating in 15 states and the District of Columbia; and 21 tax credit programs are operating in 17 states.

But in the age of Trump, Robinson says, much of the talk about private companies, which earn billions providing services to the government, has taken an anti-capitalist turn: namely, the argument that if a company has a contract with the government, it shouldn’t be allowed to profit.

But is that even remotely realistic? For one, these types of relationships have been functioning for more than 100 years, not without flaws but certainly more efficiently than government could manage alone. Two, what would be the incentive for companies to do the work if they can’t benefit from providing the service? They already do it more cheaply than could a parallel organization created by government to perform the same function without profit.

Three, as Robinson points out, it’s simply more feasible for some government agencies to contract out certain services while doing others in-house. He uses examples from public schools, for instance in the area of technology support: let Apple and Microsoft handle student computer services, not the schools. Or how about student transportation?

According to a recent report from Bellwether, district-managed public school buses account for approximately two-thirds of the 480,000 buses that transport 25 million students in urban and rural school districts each year. Private companies such as First Student, Inc., which has a contract with 1,200 school districts and employs 57,000 people to drive 6 million students to school each day, are among for-profit service providers that compose the remaining one-third. Why do districts outsource transportation? According to the National School Transportation Association, ‘School bus contracting benefits schools and school districts nationwide. Outsourcing transportation redirects attention and financial resources back into the schools that were overburdened by the expense and administrative commitment of providing their own student transportation.’

Robinson lastly makes the case that some anti-privatization groups may not want to admit: public employees benefit from investing in the private sector. Remove that profit margin, and public employees lose out, both in terms of upper salary limits and in the loss of profitable companies in which to invest their retirement savings.

According to an American Investment Council report on the investments of more than 155 public pension funds in various equity markets, funds invested in private equity produce a median 10-year annualized return nearly 4 percentage points higher than those invested in public equity. For example, the Teacher Retirement System of Texas invested $16.41 billion in private equity and came away with a 15.4 percent annualized 10-year return. The New York State Teachers’ Retirement System invested $8.26 billion in private equity and garnered a 13.2 percent annualized return. The point is that these teachers, and countless more, will be able to retire with some comfort based on the investment of their public pensions in the private equity market.

So having profitable companies that provide valuable services seems like a smart choice that works on both sides of the coin, complementing government services while also providing a revenue stream for government investments. Seems like a viable course of action, one currently threatened by anti-capitalistic forces.

What do you think?

Which Pays Better Wages? Government or Private Sector

The Congressional Budget Office, the federal government’s number cruncher, recently completed an analysis comparing salaries and benefits received by employees of federal and large private-sector employers, and concluded that, all things being equal, the federal government pays better wages than the private sector.

On average, the federal government’s compensation package pays a 17 percent premium over the private sector.

The analysis, called “highly professional” and “state-of-the art” by former Social Security Administration Deputy Commissioner Andrew Biggs, is an attempt to do an apples-to-apples comparison by taking into account levels of education and experience.

All-in compensation per full-time equivalent federal employee in 2015 was about $123,000. Assuming a 17 percent federal pay premium, this implies that on average a similar private-sector employee would receive total pay and benefits of about $105,000, an annual difference of about $18,000.  …

When averaged over 2.1 million federal employees, the federal compensation premium adds up to real money. Total federal compensation last year was close to $260 billion. A 17 percent difference is about $38 billion per year, equal to what the federal government spends on energy and the environment and substantially exceeding federal spending on transportation.
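The arithmetic behind these figures is easy to verify. A minimal back-of-the-envelope sketch, using only the numbers cited above (the $123,000 per-employee figure, the 17 percent premium, and the 2.1 million head count):

```python
# Back-of-the-envelope check of the CBO-derived figures cited above.
federal_comp = 123_000    # all-in compensation per federal FTE, 2015
premium = 0.17            # federal pay premium over the private sector
employees = 2_100_000     # federal civilian workforce

private_equiv = federal_comp / (1 + premium)   # ~$105,000 for a similar private-sector employee
per_worker_gap = federal_comp - private_equiv  # ~$18,000 per year

total_comp = federal_comp * employees          # ~$258 billion in total federal compensation
total_gap = per_worker_gap * employees         # ~$38 billion attributable to the premium

print(round(private_equiv), round(per_worker_gap))
print(round(total_comp / 1e9), round(total_gap / 1e9))
```

The numbers line up with the report’s: dividing out the 17 percent premium yields roughly $105,000 and an $18,000 gap, and multiplying across 2.1 million employees gives a total near $260 billion, of which about $38 billion is the premium.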

The CBO report found that 91 percent of federal employees have an education ranging from high school diploma to master’s degree, and that these employees make more than those with equivalent educations at similar jobs in the private sector. The report found, however, that the 9 percent of the federal workforce with doctoral-level degrees makes 18 percent less than those with equivalent degrees in the private sector.

Biggs says that differences in grades, alma maters, and fields of study have not been measured, so there’s no way to know whether federal workers are more “middle of the road” students from average colleges compared with Ivy Leaguers with top grades. He suggests that this lack of information may be where the weakness in the report lies, and it could be a notable variable, since “most private-sector employers could not attract and retain employees while paying 18 percent less than their competitors.”

Doubling back, however, Biggs then says that the federal pay premium could be hurting innovation, because workers who choose to make more money in government than they could in the private sector are squandering their potential creative energies.

As the CBO report shows, for less-educated workers federal pay is more than 50 percent higher than private-sector levels. This makes it almost impossible for an employer of less-educated workers to compete and, as a result, the best of that group — employees with the greatest drive, imagination, and leadership — may find themselves employed in government rather than the private sector, where they might make a larger impact on their communities. …

There are many highly-educated, highly skilled, highly-motivated Americans working for the federal government doing important jobs. But we shouldn’t miss the risk that generous federal pay could mean the founders of the next Google or Tesla find themselves working in a federal office building instead of creating the innovations that can change the world.

But perhaps the well-paid average government worker of a decent education isn’t missing his calling. A recently released study that tracked 81 high school valedictorians through their careers found that the best and the brightest often end up in great jobs, but ones that lack creativity. The suggestion is that the early track toward professional success pushes these highly motivated students to avoid risk-taking. They neither pursue eminence in one particular field nor devote themselves to a single passion.

“They obey rules, work hard, and like learning, but they’re not the mold breakers. … They work best within the system and aren’t likely to change it.”

In other words, dropouts like Bill Gates, Steve Jobs, and Mark Zuckerberg are unlikely to be interested in government careers in the first place.

Ultimately, the federal government’s high pay does have side effects. It skews the pay scale and impacts the labor market, making it harder for companies to compete for bright employees. However, if the goal is to populate the federal government with good-quality workers, financial benefits are a solid offer to attract them.

Student Loan Defaults Are Huge; Do We Know Who’s Not Repaying Their Debt?

Outstanding student loan debt totals $1.3 trillion. That’s a big number, and it isn’t going down, because the number of student loan defaults is massive and growing yearly.

The nation’s student loan industry is nearly as large as the federal government’s largest mortgage program through the Federal Housing Administration. The federal government is responsible for issuing 90 percent of all student loans given — nearly $100 billion in federal student loans are offered every year.

The variety of federal plans under the Direct Loan program is large. Available to students on all kinds of degree tracks, whether post-graduate or a short-term certificate, federal student loans frequently carry repayment terms more generous than what a private lender would offer.

The repayment terms are also extremely flexible. Some are based on fixed payments spread over decades. Others allow repayment to rise with borrowers’ earnings. Still others are set at 10 percent of adjusted gross income (after an exemption of 150 percent of the federal poverty guidelines). Some unpaid balances are forgiven after 20 years, or half that if the borrower works in a nonprofit or government job. Deferments and forbearances allow individuals to suspend payments for years.

It’s great that so many people are looking to better educate themselves, and it’s remarkable that so many payment plans are available, but 8 million people are in default on their federal student loans today, and nearly 40 percent of all borrowers are in default, delinquent, or using the forbearance and deferment options. That’s right: 40 percent of all borrowers are not paying back their loans.

What to do about it? Well, deciding a plan of action has hit a bit of a speed bump. Sadly, the government doesn’t know why the delinquency rate is so high because it’s not collecting the information needed to find out.

Reports suggest that many of the borrowers who default never even make the first payment on their loans. But it is impossible to analyze the data to better understand this issue. Some statistics also imply that a large share of defaulted loans are held by borrowers who left school over a decade ago, but many borrowers also leave default quickly and return to good standing. The lack of data means we do not understand what explains those very different patterns, and how policymakers might tailor solutions to these two groups.

Public policy researcher Jason Delisle told Congress that being able to accurately collect the information is halfway to solving the problem of delinquent debt, and to formulating a policy to tackle it.

Figuring out how to deal with unpaid student loans, and the toll they take on the federal budget, the economy overall, and an education system that may not be properly serving its students, is critical. Delisle made several suggestions on how to get the right information, and he urged Congress to take a closer look.

Far too much is at stake for lawmakers to be satisfied with the existing data. Taxpayers and students deserve better than policies developed through anecdotes and assumptions.

Rebuilding America: An Investment in Social Capital

With the advent of modern transportation, community certainly extends beyond the boundaries of one’s home, so it shouldn’t be a great surprise that the percentage of adults who say they spend a social evening with a neighbor at least several times a week fell to 19 percent in 2016 from 30 percent in 1974.

No longer is this country based on loving thy neighbor, but perhaps neighborliness is a lost art in need of a renaissance.

That’s the gist of a new report just released by the Joint Economic Committee on Capitol Hill. “What We Do Together: The State of Associational Life in America” is part of the Social Capital Project, run by Sen. Mike Lee of Utah.

Its stated purpose?

The Social Capital Project is a multi-year research effort that will investigate the evolving nature, quality, and importance of our associational life. ‘Associational life’ is our shorthand for the web of social relationships through which we pursue joint endeavors—namely, our families, our communities, our workplaces, and our religious congregations. These institutions are critical to forming our character and capacities, providing us with meaning and purpose, and for addressing the many challenges we face.

The goal of the project is to better understand why the health of our associational life feels so compromised, what consequences have followed from changes in the middle social layers of our society, why some communities have more robust civil society than others, and what can be done — or can stop being done — to improve the health of our social capital. Through a series of reports and hearings, it will study the state of the relationships that weave together the social fabric enabling our country — our laws, our institutions, our markets, and our democracy — to function so well in the first place.

The first report from the project is a bit dispiriting. While Americans are much better off materially, the social fabric is frayed, fractured, and seemingly coming apart. At risk are the social norms that sustain a middle class and a “free, prosperous, democratic, and pluralistic country.”

Some of the findings in the report reveal that social capital is dropping because Americans are spending less time socializing with neighbors, declining to vote, and losing trust in their fellow Americans (from 46 percent in 1972 to 31 percent in 2016, according to the General Social Survey).

Political columnist Ramesh Ponnuru points out some exceptions raised in the report.

Rates of volunteering have increased. Some kinds of political engagement have also risen: The percentage of the population that reports having tried to influence someone else’s vote has gone up over the last few decades. The overall story, though, is one of fewer and weaker interpersonal connections among Americans.

Social scientist Charles Murray, who testified to the Joint Economic Committee this week, described the impact of a decline in social capital: fewer people are getting married, and fewer men are working. He said that the government can try to find policies to encourage behavioral changes, but the declines are symptoms of a larger, more visceral problem.

If I had to pick one theme … it is the many ways in which people (behave) impulsively — throwing away real opportunities — and unrealistically — possessing great ambitions but oblivious to the steps required to get from point A to point B to point C to point D in life.

In other words, the desire for instant gratification has its consequences. Another problem he cited is a general self-destruction created by the squandering of an ample number of opportunities to get ahead.

The solution?

It comes down to the age-old problem of getting people, especially young people, not to do things that are attractive in the short term but disastrous in the long term and, conversely, to do things that aren’t fun right now but that will open up rewards later in life. The problem is not confined to any socioeconomic class. The mental disorder known as adolescence afflicts rich and poor alike. And adolescence can extend a long time after people have left their teens. The most common way that the fortunate among us manage to get our priorities straight — or at least not irretrievably screw them up — is by being cocooned in the institutions that are the primary resources for generating social capital: a family consisting of married parents and active membership in a faith tradition.

I didn’t choose my phrasing lightly. I am not implying that single parents are incapable of filling this function — millions of them are striving heroically to do so — nor that children cannot grow up successfully if they don’t go to church. With regard to families, I am making an empirical statement: As a matter of statistical tendencies, biological children of married parents do much better on a wide variety of important life outcomes than children growing up in any other family structure, even after controlling for income, parental education, and ethnicity. With regard to religion, I am making an assertion about a resource that can lead people, adolescents and adults alike, to do the right thing even when the enticements to do the wrong thing are strong: a belief that God commands them to do the right thing. I am also invoking religion as a community of faith … For its active members, a church is far more than a place that they go to worship once a week. It is a form of community that socializes the children growing up in it in all sorts of informal ways, just as a family socializes children.

Murray said his ideas are not meant to generate policy recommendations, but more a warning.

I would argue that it is not a matter of ideology but empiricism to conclude that unless the traditional family and traditional communities of faith make a comeback, the declines in social capital that are already causing so much deterioration in our civic culture will continue and the problems will worsen. The solutions are unlikely to be political but cultural. We need a cultural Great Awakening akin to past religious Great Awakenings.

Will the Social Capital Project be able to trigger a “Great Awakening”? Perhaps not, but a disconnect in society will most certainly cause bigger problems, ultimately leading to a larger breakdown that homegrown gumption will have to fix.

As Ponnuru explains, a return to the aspirational nature of social capital may require a “rediscovery of Tocqueville.”

Sentiments and ideas renew themselves, the heart is enlarged, and the human mind is developed only by the reciprocal action of men upon one another. … In order that men remain civilized or become so, the art of associating must be developed and perfected among them.

Everybody Lies: Except in a Google Search

Don’t bother answering questions from the next pollster who calls to do a survey. You’re probably going to lie to him, because “everybody lies.” And there’s no point in taking a survey if you’re going to lie. Besides, Google’s already got you on the truth meter.

That’s one of the main discussion points in the new book, “Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.”

The blurb on the book says, “By the end of an average day in the early 21st century, human beings searching the Internet will amass 8 trillion gigabytes (GB) of data.” Every day, 8 trillion GB. What does that even amount to? Who knows, but it’s a lot. The average computer has about 4 GB of memory. A flash memory card in a camera may store 16 GB. We’re talking 8 trillion GB – daily.

So what are people searching? Pretty much everything, according to “Everybody Lies” author Seth Stephens-Davidowitz.  And the data these searches reveal can be one useful tool for putting the human psyche under the microscope.

“People are honest on Google. They tell Google what they might not tell to anybody else. They’ll confess things to Google that they wouldn’t tell friends, family members, surveyors, or even themselves,” Stephens-Davidowitz said Tuesday in remarks about his book.

Take, for instance, some of the common confessional-style searches that Google gets: “I hate my boss,” “I’m happy,” “I’m sad,” or even “I’m drunk.”

Some of the searches can become rather morose and depressing. For instance, after the San Bernardino attack in 2015, in which 14 people were killed and another 22 seriously injured, top Google searches that soon followed included “Muslim terrorists” and “kill Muslims.” Stephens-Davidowitz says that, while such searches lack the context to reveal exactly what people were trying to express, they still provide guidance.

Here’s one way the data were used. Shortly after the attack, President Obama delivered a speech to try to calm fears about Muslims in America. But his grandiose sermonizing about opening America’s hearts backfired; even during the speech, people got angrier. Then, at one point, Obama said that we have to remember that Muslim-Americans are our friends and neighbors, our sports heroes, and members of the military willing to die to defend this country.

Immediately, while the speech was still being given, Google searches for “Muslim athletes” spiked. The increase was so notable that when Obama gave a speech a couple weeks later on the same topic, he skipped the lecturing and focused on the contributions of Muslim-Americans.

Stephens-Davidowitz argues that while Obama’s sermon didn’t tell anybody anything that they didn’t know, the line about sports heroes provoked curiosity, provided potentially new information, and redirected attention. This may not indicate that there’s a science to calming fears after a terror attack, but it does show the power of the data to change how people act and react.

Stephens-Davidowitz says part of the reason why data searches are more useful than old-fashioned survey questions is because people tend to lie in surveys to make them look good. It’s called social desirability bias. It happened during the elections of 2008.

During that time, most Americans surveyed said Obama’s being black didn’t matter. Yet during the election, there was a spike in racist term searches. And graphing that data revealed that racist term searches were geographically divided between East and West. While correlation is not causation, where the racist term searches spiked, Obama lost about 4 percentage points of the vote over the previous Democratic candidate (John Kerry) in Democratic strongholds. He also generated a 1-2 percentage point increase in the number of African-Americans who voted.

Map of Google searches of racist content

The book, “Everybody Lies,” isn’t entirely about politics. It covers a variety of topics, like the stock market, crime, sports, and, of course, sex, a hugely commercial enterprise on the Internet. In one example about the truth of big data, Stephens-Davidowitz notes that American women said in recent polling that they had sex (heterosexual or homosexual) about once a week and used condoms about 20 percent of the time. Extrapolating those numbers, about 1.6 billion condoms would have been used that year. But asking men the same question yielded just 1.1 billion condoms allegedly used that year.

So who’s telling the truth, men or women? Neither. According to sales reports, just 600 million condoms were sold during the year in question.
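The size of the gap is easy to quantify. A rough sketch using only the figures above; the per-person rate and the implied head count are back-of-the-envelope calculations, not numbers from the book:

```python
# Implied annual condom use from the self-reported survey answers above,
# compared against actual sales.
weeks_per_year = 52
freq_per_week = 1.0    # reported: sex about once a week
condom_rate = 0.20     # reported: condoms used ~20% of the time

per_person = weeks_per_year * freq_per_week * condom_rate  # ~10.4 condoms/year

women_total = 1.6e9    # total implied by women's answers
men_total = 1.1e9      # total implied by men's answers
sold = 0.6e9           # condoms actually sold that year

# Even the lower self-report overstates actual sales by more than 80 percent.
print(men_total / sold)    # ~1.8x sales
print(women_total / sold)  # ~2.7x sales

# Back-calculated head count implied by the women's total.
print(women_total / per_person / 1e6)  # ~154 million women
```

The men’s answers imply nearly twice the condoms actually sold, and the women’s answers imply almost three times as many, which is the author’s point: both groups overstate.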

Stephens-Davidowitz conjectures that people have an incentive to tell the truth to Google in a search, more so than to a pollster asking a survey, because they need information. For instance, an increase in the search volume for voting places in an area in the weeks leading up to an election is more likely to reveal whether turnout is going to be high in that location than whether a pollster finds that 80 percent of the people say they will vote.

But is Internet search a digital truth serum? Is it the best way to get real answers? Yes and no.

It depends on how available other high-quality data are. For instance, Google Flu Trends, which attempted to estimate how sick the population was during flu season based on searches about symptoms, was not as accurate as the flu modeling used by government agencies like the Centers for Disease Control and Prevention.

Furthermore, what people search doesn’t explain why people search. Likewise, Google doesn’t identify who’s searching so we don’t know if the search is a representative sample of the population. There’s no way of knowing what an absolute level of response would generate. For that, we need lots of different types of data.

But Internet searches may be useful in measuring the human psyche more so than in predicting futures. Big data can be helpful in looking at information that does not require very precise numbers. Predicting an election within 5 percentage points isn’t helpful. But it probably is not a big deal to be off by 10 percent when counting the number of condoms used in a year.

As for topics like child abuse, Stephens-Davidowitz says that he’s not actually sure how to use the data to help governments and protective agencies develop programs to identify and address abuse, but that it’s certainly information that would be helpful in filling a gap in reporting. And like any pollster worth his salt will tell you, being able to ask the right question is one vital way of getting to an accurate answer.

Watch the remarks by Stephens-Davidowitz.

The Always Entertaining State GDP Map Is Back

University of Michigan-Flint Economics Professor Mark Perry annually produces a very helpful visual tool: a state GDP map that compares how each U.S. state’s economy matches up to a corresponding country of equal output.

It’s a great way to see how enormous the United States’ GDP is compared to the rest of the world.

In short, U.S. GDP in 2016 was $18.6 trillion in total, which is 24.7 percent of the global gross domestic product, despite a population that is only 4.5 percent of the world’s total.

Some other interesting facts: If California, Texas, and New York were one country, it would rank third in the world, with $5.7 trillion in GDP. That would put it ahead of No. 3 Japan ($4.9 trillion) by almost $1 trillion.

Elsewhere, Pennsylvania’s GDP, $725 billion, is larger than that of Saudi Arabia, with all its oil wealth. Florida, with a $926 billion GDP, produced about the same as Indonesia ($932 billion), even though Florida’s labor force is only 8 percent the size of Indonesia’s (127 million).

Perry explains that this feat demonstrates one of the greatest assets that America has — its people and their liberty to work.

Adjusted for the size of the workforce, there might not be any country in the world that produces as much output per worker as the U.S., thanks to the world-class productivity of the American workforce. The map above and the statistics summarized here help remind us of the enormity of the economic powerhouse we live and work in. So let’s not lose sight of how ridiculously large and powerful the U.S. economy is, and how much wealth, output and prosperity is being created every day in the largest economic engine ever in human history.
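Perry’s output-per-worker point can be roughly illustrated with the Florida-Indonesia pairing above. A minimal sketch; Florida’s labor-force count is back-calculated from the “8 percent of Indonesia’s 127 million” figure cited earlier, not a separately sourced number:

```python
# Rough GDP-per-worker comparison using the figures cited above.
florida_gdp = 926e9       # Florida GDP
indonesia_gdp = 932e9     # Indonesia GDP

indonesia_workers = 127e6
florida_workers = 0.08 * indonesia_workers  # ~10.2 million, back-calculated

# Output per worker, in thousands of dollars.
print(florida_gdp / florida_workers / 1e3)      # ~$91k per worker
print(indonesia_gdp / indonesia_workers / 1e3)  # ~$7.3k per worker
```

By this crude measure, each Florida worker produces roughly twelve times the output of an Indonesian worker, which is the productivity gap Perry is pointing at.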

Click on the state GDP map to enlarge it.

State GDP Map

Read the original article here.

FCC Website Crash Doesn’t Free the Internet

Apparently, the guy with the HBO comedy show doesn’t think innovation is a good thing. So John Oliver, host of “Last Week Tonight,” decided it’d be a good idea to direct his fans to a website that would take them to a page to file comments with the Federal Communications Commission about its plans to roll back Obama-era rules on so-called “net neutrality.”

The FCC website crashed, and Oliver fans took credit for pushing so much traffic to the site that it couldn’t handle it. The FCC claimed its website had been hit by a cyberattack after Oliver’s segment.

In a true case of irony, it would appear that Oliver fans think crashing a website secures Internet freedom. In another apparent delusion, they also believe that net neutrality regulations help give Internet access to more people. In fact, a rollback of the two-year-old rules returns the Internet to the conditions under which it thrived for nearly 20 years.

When President Clinton broke down the barriers created by the telephone companies trying to dictate how the emerging digital economy should evolve, it was heralded as a breakthrough for competition. The commercial Internet was born, and nothing has been the same since. The world shrank as billions of people became virtually connected.

Then, suddenly in 2015, the rules changed, and the Internet was treated like a utility to be regulated rather than an innovation to be nurtured. To hear Bret Swanson, president of Entropy Economics LLC, a strategic research firm specializing in technology, tell it, the oddly timed push toward net neutrality was a “cause in search of a purpose.”

Now, Ajit Pai, the head of the FCC, is calling to roll back the 2015 net neutrality rules. Swanson explains:

Pai’s approach will now do three things: (1) return broadband to its original classification as a Title I information service; (2) eliminate one of the 2015 order’s most mischievous policies, the ‘general conduct rule,’ under which the FCC gave itself nearly unlimited power to govern the entire digital economy; and (3) seek comment on the order’s so-called bright line rules on blocking, throttling, and paid prioritization.

What does this mean? Net neutrality proponents like former FCC Chairman Tom Wheeler, who ushered in the 2015 regulations, claim that rolling back Title II, which essentially designates the Internet as a utility, will cause a slowdown or blockage of Internet content to end users because it will permit service providers to price their services. Others say that it means people will only have access to what they’re willing to pay for.

But Swanson says Wheeler and other critics have it all wrong. In fact, he calls Wheeler’s grasp of the impact of Title II “a near total misunderstanding of the technology, economics, and history of the Internet.”

Title II, with its price controls and endless permission slips, would have delayed the buildout of residential, enterprise, and mobile broadband networks. The general conduct rule and bright-line ban on prioritization may have blocked the emergence of important ‘paid priority’ technologies like content delivery networks (CDNs) and paid peering, which were essential for Web video, and prohibited industry partnerships, such as the Apple-AT&T hook-up that made the iPhone possible. Until Pai ended the investigation, Title II was already starting to chill an important new practice, known as free data, which allows content providers to subsidize the data consumption of consumers.

Indeed, a fascinating description by Babette Boliek, a Pepperdine University law school professor, explains just how onerous the “General Conduct Rule” in the Open Internet Order (a.k.a. the “net neutrality” regulations) is.

It is a regulatory steam valve where the FCC gives itself permission to regulate anything it can’t think of now that it might think of (or be convinced of) later, which may ‘unreasonably’ interfere with the FCC’s ever expanding definition of ‘net neutrality.’ That includes anything that may ‘unreasonably’ disadvantage … well, anyone.

… Think of it this way. What if you sell lemonade by subscription? For a monthly fee, you send your subscriber two gallons of any variety of lemonade the consumer selects from a list of 50 varieties (all the lemonade in the world). Several other lemonade subscription services are vying for your customers so you come upon a creative competitive idea — in addition to the two gallons of lemonade the consumer selects from the list of 50, the consumer may also select an unlimited amount of lemonade from a subset of the 50. Great idea! More for the consumer at the same price! Since the unlimited amount will come from a subset of the 50, the consumer who craves variety will likely use her two gallon allocation to pick a new type of lemonade — one not from the subset that she can get free. It’s a win for the consumer, a win for the less popular lemonade producer, and a win for you, the lemonade subscription seller.

But what if someone argues that this ‘unreasonably’ disadvantages a lemonade producer? Is that really the most important takeaway from this lemonade example? Shouldn’t we focus on how happy the consumer is? How is ‘disadvantage’ even defined?

Boliek concludes that “FCC policies that focus on the costs borne by corporations and businesses almost to the exclusion of actual benefits for consumers are just bad policy.”

Had Title II standards been applied to the Internet in the 1990s, we might never have enjoyed “supercomputer smartphones, Netflix, GitHub, Google Maps, Kindle, Facebook, endless cloud services, and online everything,” which includes the future of digital innovation — connected cars, mobile health, and 5G wireless, for instance. Some people may say this is a good thing. But the imposition of net neutrality rules over the past two years appears to have slowed capital investment, and smaller providers have been prevented from expanding their market share. As a result, consumers are the ones denied access and options.

So is this what John Oliver is protesting, a free market that benefits consumers and the economy? That seems like a fight without an argument.

Read more about the rollback of Obama-era Internet regulations.

Beyond the Military: Veterans in Public Office

The United States has always prided itself on the separation of the military from civilian service. It’s one of the foundational tenets of our republic, and a matter that George Washington took very seriously, as both the commander-in-chief of the Continental Army and the first U.S. president.

Indeed, Washington invoked the separation of civilian from military power to reassure New Yorkers that they would not become beholden to a military-led government.

“When we assumed the Soldier, we did not lay aside the Citizen,” Washington wrote to the New York Legislature in 1775, as the populace worried about replacing the monarchy with a standing army.

Ironically, Washington set the standard for Americans’ comfort with military veterans in public office.

Of the first 25 presidents, 21 had military experience, beginning with Washington. The custom of veterans entering public office peaked in the 1970s, when veterans made up 72 percent of the House of Representatives and 78 percent of the Senate. In the last Congress, the percentages were down to 18 and 21, respectively.

The decline in the number of veterans in federal office can partly be attributed to the fact that, for better or worse, politics has become a career (as has the military). And while the military remains a viable option for individuals from all walks of life to build a career, the barriers to entry of public office are rising, increasingly dependent on the amount of money, power, and name recognition one can accumulate, not to mention the advancing age of the average lawmaker.

In the 114th Congress, the average age of a U.S. House member was 57, and the average senator was 61. Among those newly elected in the past election, the average new representative was 52.3 years old; the average new senator was 61. Meanwhile, the typical representative had served 8.8 years (4.4 terms) and the typical senator 9.7 years (1.6 terms).

By comparison, the median age of post-9/11 veterans is 33, with nearly 60 percent younger than 34. The median age of pre-9/11 veterans is 66.

Despite the barriers, military veterans offer a unique contribution to the Legislature from having served in the Armed Forces. They are cognizant and respectful of the set of responsibilities, as enumerated in the Constitution, that the military holds in U.S. society, and as veterans, they are held in especially high regard in the public eye, which can’t be said for Congress.

A first-of-its-kind study looks at the role of veterans in state legislatures as a precursor for higher office. While veterans make up on average 9 percent of the adult U.S. population, they account for an average of 14 percent of state legislators across the 50 states, a share above their presence in the general population but below even their share in federal office.

Out of 7,383 state legislators, 1,039 have military experience.

Veterans currently holding office in their state legislatures represent every branch of the Armed Forces, including the Army, Army Reserves, Army National Guard, Marine Corps, Marine Corps Reserves, Air Force, Air Force Reserves, Air National Guard, and Coast Guard.

They include both the pre-AVF (all-volunteer force) and the AVF eras. Some have served in peacetime and some in war — and some have served in both. They hail from a wide variety of veteran-era cohorts, from World War II to our contemporary post-9/11 designation. They have been on active and reserve duty, deployed multiple times, seen combat, and been awarded Purple Hearts and even Bronze or Silver Star Medals.

The study notes that, across state legislatures, veterans are 70 percent Republican and 30 percent Democratic, though the split varies considerably from state to state.

The veteran political party division in state legislatures does not necessarily reflect the majority-minority party division. Additionally, several states feature political parties other than the Republican and Democratic parties, and a few veteran legislators fall in these other ranks, such as one unaffiliated state legislator in Maine and one Conservative Party state legislator in New York.

Additionally, the data reveal that 40 percent of the total veteran population is located in the 16 states making up the South; that the states and localities where the highest numbers of veterans currently live are not necessarily where veterans make up the highest percentage of the state population; and that the largest state legislatures are not necessarily in the largest or most populous states.

The study goes into a good amount of detail breaking down representation in legislative bodies by other measures, but its overall purpose is to set a baseline from which to begin charting the movement of veterans from state-level public office to national office in Congress. It also serves another important purpose:

Only by filling out that study and tracking future election cycles will we be able to understand the political engagement of veterans, gauge how public-service-minded veterans are in the AVF era, and determine to what extent, as many have speculated, the post-9/11 generation is the new ‘greatest generation,’ more committed to public service than their parents.

In doing so, the hope is also to highlight the public service commitment of military veterans in general. While a handful of organizations have begun to survey veterans’ civic attitudes and behaviors and publish positive data about veterans as civic assets, residual Vietnam-era narratives linger. These narratives misrepresent the military veteran, often in powerful ways, as a broken human being in need of society’s pity rather than a capable and strengthening element of it. This study is one attempt to counter that misrepresentation.

Click here to read the article on legislative members from the military.

 

What US News & World Report’s High School Rankings Missed

There’s a saying that if the only tool you have is a hammer, everything looks like a nail. Another, perhaps more humorous one, is the proverbial story about the drunk looking for his keys under the street lamp.

The sayings make a similar point: if you have only one tool to identify and solve a problem, you’re never going to solve the problem you actually face.

Such is the problem with the U.S. News & World Report ranking of the best high schools in America, as identified by education researcher Nat Malkus.

For Malkus, USNWR does a decent job with the tools it has to measure the performance of more than 20,000 U.S. public high schools. The problem, however, is that it uses only one tool, over and over again, and that tool doesn’t accurately measure how well schools educate students.

Each year, U.S. News teams up with RTI International to run 20,000 public high schools through a four-step process to rank which are the best. In step one, they evaluate schools’ proficiency rates on state math and reading tests against statistical expectations given their student poverty rates. Passing schools move to step two, in which U.S. News assesses whether historically disadvantaged students performed better than the state average. In step three, U.S. News cuts all schools whose graduation rate is below 75 percent (somewhat odd, given that the national average is 83 percent). In step four, schools are ranked on a ‘College Readiness Index,’ which is based entirely on their success in Advanced Placement courses.

What makes a school ‘best’ in the U.S. News rating system? A school’s broader performance on state tests has to be moderately above average to clear the first three steps, but that left more than 29 percent of the schools moving on to step four this year. After that, it all comes down to AP passage rates. … No doubt, AP success is a high bar for high school students, and since the AP tests are the same nationwide, it provides a usable metric for academic excellence. But is it a good enough indicator to decide which high schools are best?

The answer is no. The reason U.S. News leans so heavily on AP is that the data are available. But that is like the proverbial drunk looking for his keys underneath the street lamp. The rankings promote the notion that the best high schools are the ones with the highest outcomes, and because AP success is the only outcome measure they have, they use it, even if the way the top schools generate those outcomes is dubious practice.
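The four-step filter quoted above works like a sieve: three pass/fail gates, then a single ranking metric. Here is a minimal Python sketch of that logic. All field names, sample numbers, and thresholds other than the 75 percent graduation floor are invented for illustration and are not drawn from the actual U.S. News/RTI methodology.

```python
# Hypothetical sketch of the four-step U.S. News filter described above.
def rank_schools(schools, grad_rate_floor=0.75):
    """Apply three pass/fail gates, then rank survivors by AP-based index."""
    survivors = []
    for s in schools:
        # Step 1: proficiency must beat the statistical expectation
        # implied by the school's student poverty rate.
        if s["proficiency"] <= s["expected_proficiency"]:
            continue
        # Step 2: historically disadvantaged students must beat
        # the state average for those groups.
        if s["disadvantaged_score"] <= s["state_disadvantaged_avg"]:
            continue
        # Step 3: cut schools below the 75 percent graduation floor.
        if s["grad_rate"] < grad_rate_floor:
            continue
        survivors.append(s)
    # Step 4: everything now hinges on the AP-driven
    # "College Readiness Index".
    return sorted(survivors,
                  key=lambda s: s["college_readiness_index"],
                  reverse=True)

schools = [
    {"name": "A", "proficiency": 0.82, "expected_proficiency": 0.70,
     "disadvantaged_score": 0.65, "state_disadvantaged_avg": 0.60,
     "grad_rate": 0.91, "college_readiness_index": 58.4},
    {"name": "B", "proficiency": 0.75, "expected_proficiency": 0.70,
     "disadvantaged_score": 0.62, "state_disadvantaged_avg": 0.60,
     "grad_rate": 0.70, "college_readiness_index": 71.0},
]
ranked = rank_schools(schools)
print([s["name"] for s in ranked])  # → ['A']
```

Note how school B, despite a higher readiness index, is cut entirely at the graduation-rate gate, while every school that clears the three gates is ordered by the AP metric alone. That is the structural choice Malkus criticizes.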

Several schools that outperformed the average in the USNWR rankings, notably the BASIS charter schools in Arizona, push their students hard in precisely the area USNWR measures (AP coursework), so they naturally appear to turn out better results than schools that take other routes to getting students from A to Z, so to speak.

The problem with looking under the street lamp is that the rankings primarily gauge where students end up, not where they start from or how much they learn. The BASIS schools dominating the top ten push advanced academics hard and are transparent about the fact that the workload is not a fit for all students. Other schools in the top ten have GPA requirements for enrollment. It’s good that there are hard-charging schools for advanced students, but it’s irresponsible to ignore how selective they are. In focusing narrowly on AP outcomes, U.S. News leaves the impression that all schools have equivalent starting points when, in reality, it’s nearly impossible for non-selective schools to end up at the top of this list.

In fairness, U.S. News is arguably doing the best it can with the available data. Data needed to gauge student learning growth are not available in ways that could be applied to all schools. And the rankings do incorporate some measures of student disadvantage, although these only apply weakly in the first two steps. The problem is that their work is branded as ranking which schools are best, but their methods don’t back that up.

What to do about it? According to Malkus, the change has already started. With the Every Student Succeeds Act, states now have the freedom to decide on their own measurements of growth – including how far students have come – on top of mere proficiency to evaluate schools’ performance in educating children. Six of 18 states have plans in place for these measurements, as well as for consequences for schools that don’t live up to state standards.

More states need to come up with appropriate evaluations. These new data would offer USNWR another tool for determining which schools do the best job of educating students. From there, we could compare how well our kids are doing across schools that face different challenges or offer limited learning options.

Read the full report on the U.S. News & World Report rankings.

Sports Industry: The Economic Spillover of LeBron James

America loves its sports teams. There’s nothing like a cross-division rivalry to get people worked up and trash talking. Teams bring a great deal of pride to cities, and that’s why the years never blunt the hurt of a team’s move, whether it be the 1984 bolt by the Colts from Baltimore to Indianapolis or the 2020 planned move of the Raiders from Oakland to Las Vegas.

Cross-country movements of teams remain psychologically, if not economically, important to cities.

But individual athletes also have their impacts on a town. Some of it is cultural or behavioral. Star athletes can be propped up as hometown heroes, or if they misbehave, they can be shamed out of a city.

Take LeBron James, for instance. Practically a household name, James has won the NBA’s MVP award four times, won three NBA championships, and been part of two victorious U.S. Olympic teams. He is a showman and a hard charger, and he is welcome wherever he wants to play.

James’ move from his hometown Cleveland Cavaliers to the Miami Heat in 2010 felt like betrayal to the locals who watched him rise from a Northeast Ohio upstart to an international superstar. Likewise, his 2014 return to Cleveland was treated like the homecoming of the prodigal son.

Now, a recent economic study concludes that James’ influence goes beyond pride. His popularity makes him a draw, but his presence has a significant economic impact on the communities where he plays.

We find that Mr. James has a statistically and economically significant positive effect on both the number of restaurants and other eating and drinking establishments near the stadium where he is based, and on aggregate employment at those establishments. Specifically, his presence increases the number of such establishments within one mile of the stadium by about 13 percent, and employment by about 23.5 percent. These effects are very local, in that they decay rapidly as one moves farther from the stadium.

Mapping out concentric circles to measure James’ impact around the sports facilities where he played, the study’s authors measured growth using employment and establishment data from Harvard’s Center for Geographic Analysis. They crunched the numbers to calculate the increase in food and beverage establishments and the number of employees in these industries within 10 miles of the Cleveland and Miami basketball stadiums.
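The concentric-circle approach described above boils down to counting establishments within successive distances of an arena. Here is a small, hypothetical Python sketch of that count; the coordinates of the sample "restaurants" and the ring radii are invented for illustration, and the study's actual data and methods are more involved.

```python
# Hypothetical sketch of a concentric-ring count around a stadium.
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def count_by_ring(stadium, establishments, rings=(1, 3, 7, 10)):
    """Count establishments within each cumulative radius (miles)."""
    dists = [haversine_miles(stadium[0], stadium[1], lat, lon)
             for lat, lon in establishments]
    return {radius: sum(d <= radius for d in dists) for radius in rings}

# Approximate location of the Cleveland arena
stadium = (41.4965, -81.6882)
# Invented restaurant coordinates for illustration
spots = [(41.4990, -81.6900), (41.5200, -81.6500), (41.5800, -81.6000)]
print(count_by_ring(stadium, spots))  # → {1: 1, 3: 2, 7: 2, 10: 3}
```

Comparing such counts before and after a superstar arrives (and across rings) is what lets the authors say the effect is concentrated near the arena and decays with distance.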

The economists then ran a couple of regression analyses and found that James’ presence increased the number of restaurants at distances of up to about seven miles.

The data show a downward trend in the number of restaurants in Cleveland between 2010 and 2014 that coincides with an upward trend in Miami. After Mr. James returned to the Cavaliers, the number of restaurants near the Quicken Loans Arena in Cleveland spiked, while the number of restaurants within a mile of the American Airlines Arena started to slide.

They also found a positive correlation between the number of regular-season wins by the Cavaliers and the Heat and the number of restaurants located within one mile of the corresponding arena.

But when they separated out the cities using a different formula, they found that James’ impact was greater in Cleveland than in Miami. So what can they conclude?

Two potential explanations come to mind. Perhaps Mr. James is particularly beloved in his native Ohio. Or maybe ‘superstar amenities’ are substitutes, not complements, and Miami has plenty of them even without Mr. James, generating fiercer competition and an attenuated impact of any specific superstar.

In other words, a town that has more to offer to its residents and visitors, an advantage Miami has over Cleveland, may feel less impact from the arrival or departure of a superstar athlete.

Whether or not you can draw a conclusion from this standalone study, it’s fun to consider. And more importantly, it suggests that it wouldn’t hurt to take care of our neighbors who make good. Their success reverberates like a stone skipping across the water.