Really neat research here from Adam Ramey on the interaction of personality and partisanship in Congress. The basic thesis is that partisanship isn’t everything – personality also matters. Ramey used some “recent methods” in text analysis to analyze floor speeches and generate personality scores for each Member of Congress. He then used the model to simulate how these personality traits interact with partisanship to explain cosponsorship, absences, and so on. For example, here’s one of his simpler charts, describing how measured conscientiousness predicts the number of absences. Much as we might expect, more conscientious Congresspeople miss fewer votes.
One thing I’m very curious about is the alignment of Congresspeople with their districts. You’ll occasionally see these maps of personality traits as they vary across the country – New England is more neurotic and more open, Minnesota is more agreeable. It would be interesting to examine the congruence between districts and their Congresspeople; with historical data especially, there’s some fascinating work to be done. Perhaps with the ideological sorting of the late 20th century, Congresspeople came into better congruence with their districts. Or perhaps the personality traits of Congresspeople turn out to be independent of their districts.
One thing I would worry about with this study is the data source – Ramey is using floor speeches from Congress. Plenty of things might reveal the true spirit or personality of a person, but I’m not sure floor speeches are among the best: they’re formal, they’re pre-composed, and they’re often written by staff. Even if they genuinely reflected a Congressperson’s personality, we might still face selection bias if Congresspeople choose when to speak and when to stay silent. For example, a particularly ill-tempered Congressperson might appear milder on paper if his Chief of Staff has any sense at all. I’m not able to comment on the specific methods Ramey is using, but this does seem like a large potential source of concern in drawing conclusions.
Awesome and welcome news about the price of solar power – it has now reached price parity with fossil fuels in certain markets. Today it’s developing economies in Asia, but it’s coming for everywhere else too. The march of solar power continues apace, and short of a shocking and unpredicted slowdown in its development, it will very quickly become the obvious choice for new power generation. As Ambrose Evans-Pritchard points out in the Telegraph, a secular decline in oil demand, and thus prices, will completely hammer rentier petrostates like Saudi Arabia, the Gulf States, and yes, Russia. For the last forty years or so, oil has been the main thing propping up the economy of Russia, and of the USSR before it.
At the margin, this should probably make us more frightened about Russia’s intentions in Eastern Europe. Vladimir Putin is not a stupid man, and unless his intelligence services are completely dysfunctional, they have likely been mulling over the alarming price charts of solar for the past four or five years. As oil demand slackens and prices fall, the Russian state’s government finances and military capability will be gutted. Without an endless river of oil money, the Kremlin might find it difficult to keep buying off elites and regular Russians. It will certainly find it difficult to maintain its relatively high degree of regional military power, especially after the extremely expensive professionalization and modernization reforms of the past few years.
Russia might at this moment be more powerful, relative to its neighbors, than it will be for the foreseeable future – or ever again, perhaps. Stephen Van Evera writes that we should worry about declining great powers starting wars in order to seize advantages before they disappear. It’s certainly hard to imagine that Russia will ever be better-positioned for a geopolitical power grab against its neighbors than it is in April 2014. If Putin wants to regain Novorossiya, the clock is ticking.
I identify a lot with Fredrik deBoer, who has a liberal arts background and is getting a crash course in quantitative methods as a grown-ass adult. He has a nice piece on “Big Data” and its broken promises. One of the most important problems is the simple fact that basically anything is “statistically significant” in a large enough dataset, for which he offers a pretty good layman’s explanation:
Think of statistical significance like this. Suppose I came to you and claimed that I had found an unbalanced or trick quarter, one that was more likely to come up heads than tails. As proof, I tell you that I had flipped the quarter 15 times, and 10 of those times it had come up heads. Would you take that as acceptable proof? You would not; with that small number of trials, there is a relatively high probability that I would get that result simply through random chance. In fact, we could calculate that probability easily. But if I instead said to you that I had flipped the coin 15,000 times, and it had come up heads 10,000 times, you would accept my claim that the coin was weighted. Again, we could calculate the odds that this happened by random chance, which would be quite low – close to zero. This example shows that we have a somewhat intuitive understanding of what we mean by statistically significant. We call something significant if it has a low p-value, or the chance that a given quantitative result is the product of random error. (Remember, in statistics, error = inevitable, bias = really bad.) The p-value of the 15-trial example would be relatively high, too high to trust the result. The p-value of the 15,000-trial example would be low enough to be treated as zero.
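The probabilities in deBoer’s example really are easy to calculate. Here’s a minimal Python sketch (the use of scipy and a one-sided binomial test are my choices, not deBoer’s) computing the chance that a fair coin would produce at least that many heads in each scenario:

```python
from scipy.stats import binom

# P(at least k heads in n flips of a fair coin): the binomial
# survival function gives P(X > k - 1) = P(X >= k).
def p_at_least(k, n):
    return binom.sf(k - 1, n, 0.5)

print(p_at_least(10, 15))          # ~0.15: 10 of 15 heads is unremarkable
print(p_at_least(10_000, 15_000))  # effectively zero: damning evidence
```

A fair coin gives 10 or more heads out of 15 about 15% of the time, which is exactly why we shrug at the small sample and believe the large one.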
With a big dataset, everything has a very low p-value. Run it through most predictive models, testing every possible variable, and you will find that nearly every independent variable appears to affect the outcome, with p-values low enough to rule out random chance. This is especially troublesome when your sample size, or n, is extremely large yet represents only a small portion of the overall population. Your results won’t be driven by random variation in the sample, but you are very likely to face the problem that your sample isn’t representative. If there is some selection bias in which datapoints are sampled, you could be producing very poor models of reality. And judging whether your data is representative is hard, because it’s hard to make statements about missing data. For really big datasets, making sure your data is representative is a devilishly difficult and extremely important problem.
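To make the “everything is significant” point concrete, here’s an illustrative Python simulation (entirely my own construction – the effect size and sample size are arbitrary assumptions): a relationship far too small to matter substantively still earns a p-value of essentially zero at scale.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.normal(size=n)
# y depends on x only trivially: x explains ~0.01% of y's variance
y = 0.01 * x + rng.normal(size=n)

r, p = stats.pearsonr(x, y)
print(f"correlation = {r:.4f}, p-value = {p:.2e}")
# The correlation rounds to ~0.01 (substantively nothing), yet with
# a million observations the p-value is vanishingly small.
```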
The promise of “Big Data” is that “n=All”, a common claim amongst its proponents. Yet this is rarely the case, often because of how the data is captured and often because of computational practicality. The fanciest machine learning models in the world won’t help you if your data is of poor quality, and pretty simple methods like linear regression can be incredibly powerful when applied to the right data. Fancy machine learning techniques do have their uses, though – for example, plain regression offers little help with “feature selection”, the problem of choosing which variables to use in constructing a predictive model. That matters most when you are investigating a question without strong theoretical guidance about which variables belong in the model.
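As one concrete example of automated feature selection, here’s a minimal sketch (my illustration, not anything from deBoer’s piece – the synthetic dataset and scikit-learn’s LassoCV are my assumptions) in which an L1-penalized regression shrinks irrelevant coefficients exactly to zero, effectively selecting features:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_obs, n_vars = 500, 50
X = rng.normal(size=(n_obs, n_vars))
# Only the first 3 of the 50 candidate variables actually matter
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(size=n_obs)

# The L1 penalty zeroes out useless coefficients; cross-validation
# chooses how strong the penalty should be.
model = LassoCV(cv=5).fit(X, y)
print("variables kept:", np.flatnonzero(model.coef_))
# typically prints [0 1 2], perhaps with a stray extra or two
```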
The most important lesson I have learned from my intensive study of statistics is that certainty is elusive. If you have no serious quantitative training (as I didn’t nine months ago), you imagine that statistical inference works like high-school math problems: you “run the numbers”, and an answer pops out. But this is almost never the case, and successful inference involves many layers of judgment in data gathering, data processing, and model-building. There are definitely wrong answers – for example, when a problem calls for predicting a probability, you can’t assume linearity. If you do, some predicted values might be negative, and a negative probability is nonsense. But answers can’t be “right”, they can only be defensible. And beyond that, reasonable, intelligent people can disagree violently about which answers are defensible.
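Here’s a small sketch of that linearity trap (my illustration, on synthetic data, using scikit-learn): a linear model fit to a binary outcome happily predicts “probabilities” outside [0, 1], while logistic regression stays in bounds.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
x = rng.uniform(-4, 4, size=(1000, 1))
# Binary outcome whose true probability follows a logistic curve
p_true = 1 / (1 + np.exp(-2 * x[:, 0]))
y = rng.binomial(1, p_true)

linear = LinearRegression().fit(x, y)   # the "linear probability model"
logit = LogisticRegression().fit(x, y)

x_new = np.array([[-4.0], [0.0], [4.0]])
print(linear.predict(x_new))             # can fall below 0 or above 1
print(logit.predict_proba(x_new)[:, 1])  # always within (0, 1)
```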
Learning statistics has been invigorating as I realize just how much is possible with a dataset and an old laptop, and humbling as I realize that statistical investigation is a lot more difficult than learning the commands in R.
On pure policy terms, I like the idea of federalizing education spending, as Felix Salmon does. It would almost certainly result in more educational equity, a good thing in and of itself. In terms of knock-on effects, it’s definitely a good thing that federal money usually comes with strings attached. It is unlikely to affect states like Massachusetts that already take education seriously, and it seems like a good way to prod Mississippi and Alabama in better directions. Deep South state governments are unlikely to take a deep interest in the education of poor black children without an awful lot of federal coercion.
That being said, I worry that the political economy of federalized education spending is unsustainable. Property taxes are an inherently unfair way to fund schools and virtually guarantee that education will be inequitably provided. But they do serve the very important function of a clear and visible social contract, in which residents both fund and receive public goods. If you’ve ever lived in a state like New Hampshire, with “donor” and “receiver” districts, you know that redistributing school funding can make politics pretty bitter and divisive. Taking it to the federal level will make this problem worse, in much the same way the fight over healthcare has gone – people with money and power really hate redistribution.
Furthermore, education spending is a form of social investment, and the federal government generally seems to underinvest. Just look at the generally deplorable state of public infrastructure. If we can’t trust the federal government to adequately provide structurally sound bridges and roads, how can we expect it to adequately prepare the minds of the next generation? Especially since metrics of education quality are necessarily more abstruse and poorly understood, it is much easier for the feds to skimp on spending without immediately visible consequences. The looser feedback loop (compared to, say, collapsing bridges) suggests that the federal government won’t be particularly responsive to declining education quality resulting from budget cuts.
I don’t know that there’s a first-best resolution to this issue. Locally funded education has a lot of problems, principal among them inequity – an inequity of resources and of attention, since well-educated districts are likely not only to be wealthier but also more committed to the principle of generously funded schools; the inequality is entrenched on many levels. On balance, federalizing the system might work somewhat better, but it’s not immediately apparent that it would. And even if equity rises, it’s entirely possible that overall school quality falls a lot: one result of our extremely unequal status quo is a relatively large number of very good suburban public schools that would be devastated by the loss of resources. This is a policy issue with a pretty rough political economy, and while there are better possible worlds out there, it’s not at all clear how we get there from here.
Dr. Greg Brannon is running for Senate in North Carolina. Dr. Brannon has some unorthodox beliefs, including some unusual opinions about fluoride and brain-implanted microchips. Dr. Brannon used to expound those beliefs on his website. Dr. Brannon no longer wishes these beliefs to be public; he now holds the much more reasonable belief that his other beliefs might be a hindrance in a Republican Senatorial primary. So Dr. Brannon has a problem, and he would like FoundersTruth to go away – the website is down, but caches are forever. He therefore requested that the Internet Archive (a private nonprofit) take down the cached copy of his site. The Internet Archive has, apparently, complied.
There is a serious and unresolved policy question here: as more and more of the keeping of “public” records devolves to private firms, what is the public interest? Keeping Internet history both stored and generally available seems like a matter of public concern, yet right now this isn’t done. I understand the Library of Congress does some of this, but not in an easily searchable form or anything like that. The question grows even more pressing as the internet is increasingly accessed through apps and other closed services. Twitter is mostly on the public internet, Facebook somewhat less so – but in either case, the information’s accessibility and retention are dictated entirely by private companies.
It would be nice to see Internet archiving and accessibility treated as a matter of public concern and thus public funding. Surely we all need to know about Dr. Brannon and his bold ideas. It seems more likely, though, that information accessibility will either go unaddressed or become a topic for heavy-handed government regulation of internet firms. That’s kind of a shame, because the costs here are so low compared to feeding the hungry, caring for the sick, or launching ill-advised military interventions abroad.
Seth Masket ably makes the case for lobbying elderly Supreme Court Justices to step down. Ginsburg and Breyer are quite elderly and, to be frank about it, unlikely to last until the next Democratic President after Obama. And we know that Supreme Court Justices are a type of politician, albeit much more principled than most. They are a lot like the kind of politicians most people say they want: ones who act with judgment and foresight to advance a vision of what is best for the country, based on their experience, insight, and wisdom. Frankly, as a liberal, I would be pretty relieved if either of them decided to step down tomorrow and make way for a younger and healthier liberal Justice. I see the case for pressuring them to step down.
The authority the Supreme Court relies on is in some ways a sleight of hand: the pretense that the Justices are above politics and sit in sober, disinterested judgment – which we all kind of know is a pretense but still badly want to believe. Their legitimacy comes neither from popular acclamation nor from political activism, but they both have and need it. To properly perform their constitutional duties, Supreme Court Justices must be unafraid to take on political authorities when it is truly necessary, and there is some evidence that when the public questions their legitimacy, they are less willing to do so.
So while political scientists know that Supreme Court Justices are really political actors, it’d probably be best if that didn’t become conventional wisdom. Frankly, the normative considerations pull in different directions, and there’s not necessarily an easy answer.
There’s a fantastic story about the “gig economy” in Fast Company, meaning companies like Taskrabbit, Exec, and so on – companies that promise to let you work the hours you want on the jobs you want, and to provide a more flexible way to work. So what is it like, trying to cobble together a living income walking dogs and delivering sandwiches via your smartphone? Well, it kind of sucks.
This makes sense. Wages are a function of supply and demand, and the supply side isn’t just a headcount of available workers – it also reflects how willing workers are to perform the work at a given wage. Workers demand a wage premium for work that is difficult or unpleasant (economists call this a compensating differential), which is a large part of why plumbers and garbagemen get paid better than most blue-collar laborers. Imagine I were to offer you two jobs doing roughly equivalent unskilled labor. One runs on a regular schedule and is relatively simple; the other requires you to scramble constantly and take orders via your phone, with gigs prone to cancellation on very short notice and no visibility into your schedule beyond the next day. You’d demand a much higher wage for the second job. Yet the Taskrabbits of the world are offering less than minimum wage.
I think these companies are mostly a fleeting phenomenon of the financial crisis. The economics mostly work out when you can pay people insultingly small amounts. That works great when the gigs are digital, as on Odesk or Elance, and you can take advantage of wage differentials to have everything done in Bangladesh or the Philippines. But it’s extremely hard to do with meatspace work, where you have to rely on First World labor and pay them more to make up for the hassle and unpredictability. In the slack labor markets of the last five years, it seemed you could get the economics to work out because of the hordes of unemployed.
But the Fast Company article suggests that when labor markets finally tighten a bit, the cost structures of these ventures will turn prohibitive pretty quickly.