Thursday, October 31, 2024

It's the Economy, Stupid!


The title of this post is taken from a quote typically attributed to James Carville, a political strategist for Bill Clinton's 1992 presidential campaign. It might equally have been credited to Karl Marx's emphasis on Economic Determinism. The graphic above is from a 2024 PEW Research Center poll which finds that 81% of Voters are primarily concerned with the Economy. Other issues come in a distant second.


The paradox here is that voters are generally uninformed about: (1) the state of the Economic System, (2) simple ECON 101 Economic Theory, (3) how the Political system works and (4) whether successive political administrations have any impact on the Economic System or are just passive bystanders taking blame for successive economic crises.  The graphic above (click to enlarge) tries to map out some of the confusion.

The basic inputs to the Political System are voting and opinion polls. The things that are under control of decision makers within the Political System are government expenditure (G), Taxation (TAX) and the Money Supply (M). How G, TAX and M affect the Economic System and Economic Outputs are open to argument. And, as I have been arguing here, Shocks to both the Political and Economic Systems are when we find out how, if at all, the two systems work. There should ideally be some Feedback between all this and voting behavior, but there are such long time delays that voters tend to forget causal relationships (see my discussion here).


So, what is the Political System really supposed to do? Here's an old idea that goes back to 17th Century terminology: the Political System is supposed to act as a Centrifugal Governor. The idea also appears in Arnold Tustin's book The Mechanism of Economic Systems (1953). When the centrifugal governor rotates more rapidly, it slows the steam engine down. Even if 17th Century Politicians thought this was how a Political Governor was supposed to work, it is no longer how things work in the 20th and 21st Centuries. The Political System is not supposed to slow down the Economic System.
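For readers who have never seen one, a governor is just negative feedback: when the measured speed rises above a set point, the throttle is closed a little and the engine slows back toward the set point. Here is a minimal toy sketch of that idea in Python (the set point, gain and engine response are made-up illustrative numbers, not a physical model):

    def governor_step(speed, throttle, set_point=100.0, gain=0.001):
        # Negative feedback: when speed exceeds the set point, close the throttle a little.
        error = speed - set_point
        throttle = min(1.0, max(0.0, throttle - gain * error))
        # Toy engine: next speed depends on inertia plus the current throttle setting.
        speed = 0.5 * speed + 100.0 * throttle
        return speed, throttle

    speed, throttle = 150.0, 0.8
    for _ in range(25):
        speed, throttle = governor_step(speed, throttle)
    print(round(speed, 1))   # settles close to the 100.0 set point

The correction works in both directions: the governor closes the throttle when the engine runs too fast and opens it when the engine runs too slow.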

It is unfortunate that the 17th Century idea of a Political Governor has been lost (we have Neoliberalism to thank), especially since Climate Change has slipped to one of the least important issues for voters. Slowing down the Economic System is one proven way to reduce CO2 Emissions, as was demonstrated during the COVID-19 lockdowns.
 

 

Friday, June 7, 2024

The Lessons of the 1930s

 


Eighty years ago yesterday, on June 6, 1944, Operation Overlord (D-Day) began. Senator Mitch McConnell, in an NYT editorial (here), reminds us not to forget the Lessons of the 1930s, the main one, he thinks, being the Isolationist Movement. We should all understand that the editorial is an oblique attack on the MAGA (Make America Great Again) movement and the current Republican presidential candidate. But what were the Lessons of the 1930s? Let's ask an AI System (here).

In response to a query, ChatGPT lists eleven lessons from the period:

  1. Dangers of Unregulated Markets
  2. The Role of Government in Economic Stabilization
  3. The Importance of Monetary Policy
  4. The Importance of Social Safety Nets
  5. The Rise of Authoritarianism
  6. The Power of Collective Action
  7. The Consequences of Isolation
  8. The Importance of Economic Cooperation
  9. Technological Innovation in Crisis
  10. The Power of Media and Communication
  11. The Impact of Environmental Mismanagement (the Dust Bowl)
My guess is that Senator McConnell would only consider #7, and even then I find his entire editorial strange since, at the end of the election cycle, he will still support the MAGA movement and the current GOP presidential candidate, neither of which is likely to change its position on US Isolationism.

Senator McConnell, however, is right that we have forgotten the Lessons of the 1930s. We basically have the rise of Neoliberalism in the 1980s to thank for our failing memory--and that, of course, was its purpose. In future posts, I hope to resurrect the Lessons of the 1930s and the failures of Neoliberalism. The topic is almost too difficult to contemplate as a project but there is plenty to write about. Feel free to grab any of the topics and blog about it yourself. I can't do everything!

P.S. The D-Day anniversary has been an emotional day for me. It is the "last hurrah" for the remaining WWII Veterans, who within a few years will no longer be able to comment on the current resurrection of Fascism.


Saturday, May 18, 2024

Models, Data and US Federal Reserve Policy Decisions

 


On January 11, 2024, the PBS News Hour interviewed Raphael Bostic, president of the Atlanta Federal Reserve Bank. One interesting part of the interview was the description of how models versus data are used in the Federal Reserve decision-making process, at least by Dr. Bostic. The role of economic models in decision making is interesting because on July 20, 2010 the US House of Representatives Committee on Science and Technology held a hearing on the topic (transcript here) and concluded that the models were not very useful. My conclusion is that the US Fed is on the right path with some of the new models being developed. Future posts will give my recommendations (as a statistician) and my reasons for making them. It is essentially my forecast for the future of US Fed model building.

In the interview above, Amna Nawaz asked when Bostic expected the economy to reach the US Fed's 2% Inflation rate target. Bostic answered:

Well, we have models, and models will give us an answer...[but]...I don't put too much stock in any of those longer-term issues...I just try to keep an eye on where things are going month to month and try to just have a clear understanding about where we stand.

In other words, the models give us some long term predictions about Inflation and Economic Growth but, for month-to-month decision making we use data.

So what exactly can we get from models? I would hope that the Atlanta Fed gets Prediction Intervals telling them that, say, Inflation might be between 1% and 3%, bracketing the 2% target. Then, as the data come in, they can be evaluated to answer Amna Nawaz's question. And, I would hope that the state of the economy would have some role in predicting the time path of inflation, something like the prediction intervals produced by Climate Models.
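To make the idea concrete, here is a minimal sketch of how such an interval could be produced, fitting a simple AR(1) model to a made-up monthly inflation series with statsmodels (the numbers and the model order are placeholders; the Fed's actual models are far richer):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Illustrative monthly year-over-year inflation readings (made-up numbers).
    inflation = np.array([4.9, 4.7, 4.4, 4.1, 3.8, 3.7, 3.5, 3.4, 3.2, 3.1, 3.1, 3.0])

    # Fit a simple AR(1) model and request an 80% prediction interval one year out.
    model = ARIMA(inflation, order=(1, 0, 0)).fit()
    forecast = model.get_forecast(steps=12)
    interval = forecast.conf_int(alpha=0.20)   # lower and upper bound for each month

    print(forecast.predicted_mean[-1])         # point forecast 12 months ahead
    print(interval[-1])                        # does the band bracket the 2% target?

As the actual readings arrive, one can simply check whether they fall inside the published band, which is one way to combine the model's longer-term bands with the month-to-month data Dr. Bostic prefers.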

The original topic of the News Hour interview was to gauge the actual strength of the economy and consumers' expectations about economic growth (although most of the interview concentrated on Inflation).


For the strength of the economy, let's look at the Atlanta Fed's GDPNow forecast for economic growth (quarterly percentage change in real GDP). The output from the GDPNow app (presented above) compares the GDPNow forecast to the range of the top and bottom ten Blue Chip forecasts. GDPNow predicts GDP percentage changes well outside the Blue Chip forecasts until we get into March of 2023. What's going on here, and why?



Maybe it would help to look at a longer time period. The St. Louis Fed publishes the GDPNow output from 2014-2024 (above and here). We can very clearly see the COVID shock, the economy's response, and the return to an approximately 2% growth rate. Note that it took approximately three years to recover from the COVID shock.
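For readers who want to reproduce this picture, the GDPNow series can be pulled directly from FRED; a minimal sketch using pandas-datareader (assuming the FRED series ID "GDPNOW", which is how the Atlanta Fed nowcast is published there):

    import datetime
    from pandas_datareader import data as pdr

    start = datetime.datetime(2014, 1, 1)
    end = datetime.datetime(2024, 6, 1)

    # Atlanta Fed GDPNow nowcast of quarterly real GDP growth, from FRED.
    gdpnow = pdr.DataReader("GDPNOW", "fred", start, end)

    print(gdpnow.tail())                  # the most recent nowcasts
    print(gdpnow.loc["2020"].describe())  # the COVID shock stands out in 2020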

There is a lot to scratch your head about in the NewsHour interview and the outputs of the GDPNow model before we even get to thinking about the problem of inflation. Why don't the Blue Chip forecasts show the COVID shock? Why does the GDPNow cellphone App not go back to 2019, before COVID, to report results? And, which forecast should we believe, if any? 

It sounds, from Dr. Bostic's comments, as if the Fed ignores the forecasts and just waits for data to come in when making decisions about the economy. That's OK, but the Fed spends a lot of time and money on large-scale, Dynamic Stochastic General Equilibrium (DSGE) models (here), the models criticized in the Congressional Hearings, models that produce yet another set of forecasts. Worse yet, the DSGE models are based on the assumption that economic agents use models to form expectations about economic variables and use these expectations to make decisions, decisions that DSGE models attempt to predict.

But which models specifically are economic agents using: the GDPNow model, the consensus of the twenty-or-so Blue Chip forecasting models, the forecasts of the DSGE models, or some other model entirely? (I have my own models that are similar to, but I argue an improvement over, the GDPNow approach.) I know the Fed is trying to be transparent and lay everything out on the table, but what I'm looking at appears contradictory, as it must have looked to the Congressional Committees. And some commentators (here) and Congressmen (here) want to get rid of the Federal Reserve, Fed forecasts and Fed policy manipulations entirely.

Interestingly enough, the current problems with Economic Policy all point back to our failure to understand the Great Depression* and the effects of economic shocks (such as the WWI-WWII shocks and the COVID shock). In future posts, I'll try to untangle this mess** because I think it is interesting and important, not because I think any economic agents (including the Fed and the ECB***) will be interested. Eventually, I will get around to looking at Inflation and Deflation!

Notes

* ChatGPT (here) lists the following causes for the Great Depression: (1) Stock Market Crash of 1929, (2) Bank Failures, (3) Reductions in Consumer Demand, (4) High Tariffs and Trade Barriers, (5) Monetary Policy Mistakes, (6) Debt Deflation, (7) Decline in International Economic Activity and (8) The Dust Bowl and Agricultural (Environmental) Collapse. 

** My working hypothesis is that we need to embed the US Economy within the World-System to not only understand the Great Depression but also to understand current economic policy confusions. The Fed doesn't really have a role for the World-System in its models.

*** The failure of Macro-economic models was also felt by the European Central Bank (ECB): "Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner".



Tuesday, April 30, 2024

Policy Models, Randomness and Free Will


In my research work I have developed a lot of policy models (see one about the US Health Care System here). If the models have any "policy recommendations" there is a very good chance that (1) no one is paying attention and (2) even if they were, they would not take advice from my models. If this is so futile, why do I keep doing it? If I have free will, maybe I should just drop my modeling fixations and play more golf.

All my policy models are based on the simple state-variable equation S(t) = F[S(t-1), X(t-1), E(t-1)] where S is the state of the system, F is some function, X are the input variables, E are the random(?) errors and t is time. In this post, I'm going to discuss the random components, E. By construction, the E are independent of the state variables, S, but what are they really? Let me take an example from Sub-Saharan Africa (SSA) because in an upcoming post I'm going to present an SSA model.
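For concreteness, here is what that equation looks like when F is taken to be linear, a minimal simulation sketch with made-up matrices (my actual models are estimated from data, not assumed):

    import numpy as np

    rng = np.random.default_rng(0)

    # S(t) = F[S(t-1), X(t-1), E(t-1)] with a linear F:
    #   S(t) = A @ S(t-1) + B @ X(t-1) + E(t-1)
    A = np.array([[0.95, 0.02], [0.01, 0.90]])   # state transition (illustrative values)
    B = np.array([[0.10], [0.05]])               # effect of the input variables
    S = np.array([1.0, 0.5])                     # initial state

    history = [S.copy()]
    for t in range(1, 50):
        X = np.array([1.0])                      # exogenous input, held constant here
        E = rng.normal(0.0, 0.02, size=2)        # the "random(?)" error components
        S = A @ S + B @ X + E
        history.append(S.copy())

    print(np.round(history[-1], 3))              # state after 50 periods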



The state variables are constructed from the raw data in the World Development Indicators (WDI) using Principal Components Analysis (PCA). The variables are CO2E (CO2 emissions), EG.USE (Energy Use), GDP, TLF (Total Labor Force) and POP (Total Population). The numbers are weights and the choice of variables is based on the Kaya Identity. The first component state variable is overall Growth, the second is (CO2-N), and the third is (GDP-N).* The three components explain 99% of the variation in the indicators and are typically all that are needed to construct the state space model S(t) = F[S(t-1), X(t-1), E(t-1)]. SSA2 and SSA3 are called the Error-Correcting Controllers (ECCs) that keep the system on the growth path (maybe). The E are components 4 and 5, but what are they really?
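A minimal sketch of the PCA construction, assuming the five WDI indicators have already been downloaded into a yearly table (the column order and the toy numbers below are placeholders, not SSA data):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Placeholder yearly observations; columns are CO2E, EG.USE, GDP, TLF, POP.
    data = np.array([
        [0.70, 0.68, 1.00, 0.40, 0.85],
        [0.72, 0.70, 1.04, 0.41, 0.87],
        [0.75, 0.73, 1.09, 0.42, 0.89],
        [0.74, 0.72, 1.08, 0.43, 0.91],
        [0.78, 0.76, 1.15, 0.44, 0.93],
        [0.81, 0.79, 1.21, 0.45, 0.95],
    ])

    scaled = StandardScaler().fit_transform(data)
    pca = PCA(n_components=5).fit(scaled)

    # Cumulative variance explained (how much the first three components capture).
    print(np.round(pca.explained_variance_ratio_.cumsum(), 3))
    # Weights defining the first component state variable (overall Growth).
    print(np.round(pca.components_[0], 2))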


I have estimated a state space model for SSA5 and an output graph is presented above (the solid line is actual data and the dashed red line is predicted). Testing shows that it is not a Random Walk, but the model doesn't do a very good job of predicting the series and misses all the turning points until after they are made. In other words, after we have exhausted 100% of the variation in the underlying data, we don't find "randomness" but rather forces that we can't predict very well and that, by themselves, explain little variance in the overall system. It's not that SSA5=(POP-EG.USE) is uninteresting (it is a Population Energy Demand ECC); it just doesn't explain a lot of variance in Sub-Saharan Africa. It might in other regions of the world, just not here.
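The Random Walk check is a unit-root test; a minimal sketch using the Augmented Dickey-Fuller test from statsmodels, run here on a placeholder mean-reverting series standing in for SSA5 (not the real component):

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(1)

    # Placeholder stand-in for SSA5: a mean-reverting AR(1) series.
    series = [0.0]
    for _ in range(60):
        series.append(0.6 * series[-1] + rng.normal(0.0, 0.1))

    stat, pvalue, *rest = adfuller(np.array(series))
    print(round(stat, 2), round(pvalue, 4))   # a small p-value rejects the unit root (not a Random Walk)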


What explains most of the time path of the SSA data? You'll just have to wait until I present the model in an upcoming post. The points I want to make here are that (1) if I try to forecast the population-energy ECC out to 2060 (above), probably the best I can say is that there is going to be a correction from 2024, and (2) when the correction will happen, and to what extent, the model cannot predict. When future data come in, the model can detect whether a correction was made. On the other hand, maybe a regional war will interrupt the adjustment process. Time will tell (not the model).

Estimating the model does not seem futile. At least it uncovers a low-variance feedback process covering population energy use. And, if you know anything about economic models, you will know that most of them do not contain feedback effects.

Another point, prompted by Sabine Hossenfelder's video on Free Will (above), is that Sub-Saharan Africa cannot be said to have free will. I will demonstrate in a future post that it is a macro-system with many human institutions, but these institutions don't have "will." Causes of policy actions might be too complex to predict ahead of time, but models can tell us what happened after the fact and clarify the underlying causal forces. That is something and I want to keep doing it.

NOTES


* (CO2-N) is an Environmental ECC. (GDP-N) is a Malthusian Crisis ECC. I'll explain ECCs more fully in future posts. For the time being, these results (at least for SSA) indicate that environmental damage and Malthusian Crisis are being monitored (at least in SSA) in attempts to maintain the growth path. ECCs will be different in different regions and different countries in the World-System.


Friday, September 25, 2020

The Problem with Voting: Weak Feedback

 


The political system in the US is under pressure not only from the COVID-19 Pandemic but also from the Right Wing attempt to gerrymander voting districts, hack election results and disenfranchise minority voters (see Jennifer Cohn's Twitter postings, here). The question is, what to do about it.

David Easton's model of the political system (graphic above) provides a way to think about voting. In Easton's model, the Political System is analyzed as a "system" with inputs, outputs, and feedback. The system is embedded in an Environment that involves the State and the World-System. In a Democracy, voting is a method of registering demands (think of ballot initiatives) and support (the percent of the voting population participating in an election).

If we trace voting through the Political System, once you have voted (either in person or absentee) the output is an electoral decision and the feedback is who won the election or what ballot initiatives were supported. The concern over the US Election System is that, after you have voted, your vote may either not have been counted or have been altered (hacked) once inside electronic systems. In other words, the feedback about what happened to your vote is either very weak or nonexistent.

Some States (for example, MyVote Wisconsin) track your absentee ballot and tell you whether it has been received. This is a step forward, but it still doesn't make sure that your vote was actually counted, that the choices you cast were properly recorded, and that election results were properly tallied. It would be very straightforward to construct an electronic voting system that would do this, but the problem would be trust and security. You should be able to log in, or go to your County Clerk's office, and learn definitively how your vote was recorded. You should be able to validate its accuracy.
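One loose illustration of the kind of feedback I have in mind is a hash-based receipt: the County Clerk publishes a digest of each ballot as recorded, and the voter recomputes the digest from their own copy and confirms it appears on the public list. This is only a sketch of the feedback idea (the salt handling, identifiers and publication scheme are invented for the example), not a solution to the trust and security problems just mentioned:

    import hashlib

    def ballot_receipt(ballot_id: str, choices: str, salt: str) -> str:
        # Digest of the recorded ballot; the salt keeps the choices from being guessable.
        return hashlib.sha256(f"{ballot_id}|{choices}|{salt}".encode()).hexdigest()

    # The county publishes receipts for all recorded ballots (illustrative data only).
    published = {ballot_receipt("WI-12345", "Initiative A: YES", "s3cr3t-salt")}

    # The voter recomputes the receipt from their own copy and checks the public list.
    mine = ballot_receipt("WI-12345", "Initiative A: YES", "s3cr3t-salt")
    print("recorded as cast" if mine in published else "feedback failure: ballot not found")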

Even if you could do this, what to do next is another problem. Blockchain has an electronic dispute resolution and tracking mechanism that might prove useful to individuals and to voting watchdog groups. At this point, there is no perfect solution and the voting system feedback mechanism remains fundamentally weak.

Voting is not the only form of feedback in the Political System. Protest is another (see the analysis of Protest feedback here) and is being used continuously in the US.

Wednesday, August 8, 2018

The Challenge of Digital Evidence


The Lawfare Podcasts, sponsored by the Brookings Institution, always seem to get below the surface and find more detail than is typically available in the electronic news media. The one above describes the problems digital service and media providers have responding to law enforcement requests (subpoenas) for digital evidence. I won't try to summarize the entire podcast but one topic caught my attention: the breadth of subpoenas.

Typically, a subpoena will include some specific evidence needed in a criminal case (e.g., a list of cell phone calls a suspect has made) but then goes on to add "...in addition to all other relevant evidence." Of course, this is the problem: who decides what is relevant, and how much data can a company be compelled to provide?

Compare the digital search to a typical warrant. Law enforcement breaks into an office with a warrant. The warrant lists specific evidence e.g., documents and physical evidence. If law enforcement finds other relevant evidence of criminal activity, that evidence is also swept up in the search. If they are looking for emails, can they ask for a backup of the company's entire email system in case some other employee (not the target of an investigation) forwarded evidence while also adding incriminating comments? Challenges to the scope of a digital subpoena can tie up searches to the point that data begins to disappear.

The podcast discusses a proposal to have a central clearing house that can broker these kinds of problems. I would add one other suggestion. If you listen carefully to the podcast, the discussion proceeds at a very high level with few specifics about the typical kinds of searches law enforcement might execute. Basically, prosecutors and law enforcement do not want to miss something that might be relevant. This is the mentality of the system.

After 9/11, the Department of Homeland Security (DHS) explored ways to share more information between local law enforcement, the FBI and DHS. "Connecting the dots" became a priority. But it was impossible to get the Federal government to specify what they wanted. Local Law Enforcement did not feel comfortable sharing its raw data, which included names of citizens never charged with any crime (e.g., witnesses). After much argument and miscommunication, DHS admitted they "wanted everything" and really just wanted to do Google searches on every bit of information they could store centrally. This is the mentality of the system.

My suggestion is for the central clearing house to work on specifying routine searches (e.g., cell phone call records) and move on from there to digital financial records, personnel records, customer data, etc. Digital service and media providers can then work on the programming necessary to meet these requests in a routine way. This might be easier than it sounds because enterprise software providers (e.g., Oracle, Microsoft, IBM) can build these searches into commercial systems that are used by most large companies.
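As a loose illustration of what a standardized "routine search" might look like (the field names are invented for this example, not taken from any actual clearing-house proposal), a narrowly scoped request could be expressed as a small structured record that providers can validate and fulfill mechanically:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class CallRecordRequest:
        # A narrowly scoped, routine request for cell phone call records.
        case_number: str
        subscriber_id: str
        start: date
        end: date
        fields: list = field(default_factory=lambda: ["caller", "callee", "timestamp", "duration"])

    request = CallRecordRequest("18-CR-0001", "555-0100", date(2018, 1, 1), date(2018, 6, 30))
    print(request)   # nothing here asks for "all other relevant evidence"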

At some point in the process of building search requirements we will get to unreasonable and ill-defined requests basically for everything. I cannot predict what will happen at this point but judges might have to decide that some requests cannot be met. The central clearing house could be an unbiased third party assuring the judicial system that a request has gone too far.