Saturday, May 18, 2024

Models, Data and US Federal Reserve Policy Decisions

 


On January 11, 2024, the PBS News Hour interviewed Raphael Bostic, president of the Atlanta Federal Reserve Bank. One interesting part of the interview was the description of how models versus data are used in the Federal Reserve decision-making process, at least by Dr. Bostic. The role of economic models in decision making is interesting because, on July 20, 2010, the US House of Representatives Committee on Science and Technology held a hearing on the topic (transcript here) and concluded the models were not very useful. My conclusion is that the US Fed is on the right path with some of the new models being developed. Future posts will give my recommendations (as a statistician) and my reasons for making them. They are, essentially, my forecast for the future of US Fed model building.

In the interview above, Amna Nawaz asked when Bostic expected the economy to reach the US Fed's 2% Inflation rate target. Bostic answered:

Well, we have models, and models will give us an answer...[but]...I don't put too much stock in any of those longer-term issues...I just try to keep an eye on where things are going month to month and try to just have a clear understanding about where we stand.

In other words, the models give us some long-term predictions about Inflation and Economic Growth but, for month-to-month decision making, the Fed uses data.

So what exactly can we get from models? I would hope that the Atlanta Fed gets Prediction Intervals telling them that, say, Inflation might be between 1% and 3%, bracketing the 2% target. Then, as the data come in, they can be evaluated to answer Amna Nawaz's question. And I would hope that the state of the economy would have some role in predicting the time path of inflation. Something like the prediction intervals produced by Climate Models:
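To make that concrete, here is a minimal sketch of how a model-based prediction interval for inflation could be produced. It is my own illustration, not anything the Fed actually runs: it assumes a monthly inflation series in a pandas Series called `inflation` and uses a simple ARIMA model from statsmodels purely for demonstration.

```python
# A minimal sketch (not the Fed's models): fit a simple ARIMA model to a monthly
# inflation series and produce a 95% prediction interval for the next 12 months.
# `inflation` is assumed to be a pandas Series of year-over-year inflation rates.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def inflation_interval(inflation: pd.Series, steps: int = 12, alpha: float = 0.05):
    fitted = ARIMA(inflation, order=(1, 0, 1)).fit()
    forecast = fitted.get_forecast(steps=steps)
    return forecast.predicted_mean, forecast.conf_int(alpha=alpha)

# Usage (hypothetical data): does the 2% target sit inside the interval?
# mean, bounds = inflation_interval(inflation)
# print(bounds.tail())
```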

The original topic of the News Hour interview was to gauge the actual strength of the economy and consumers' expectations about economic growth (although most of the interview concentrated on Inflation).


For the strength of the economy, let's look at the Atlanta Fed's GDPNow forecast for economic growth (the quarterly percentage change in real GDP). The output from the GDPNow app (presented above) compares the GDPNow forecast to the range of the top and bottom ten Blue Chip forecasts. GDPNow predicts GDP percentage changes well outside the Blue Chip forecasts until we get into March of 2023. What's going on here, and why is it happening?



Maybe it would help to look at a longer time period. The St. Louis Fed publishes the GDPNow output from 2014-2024 (above and here). We can very clearly see the COVID shock, the economy's response, and the return to an approximately 2% growth rate. Note that it took approximately three years to recover from the COVID shock.
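If you want to look at the series yourself, here is a sketch of pulling it from FRED with the pandas_datareader package. I am assuming the FRED series id is GDPNOW; check the FRED site before relying on it.

```python
# A sketch of retrieving the GDPNow series from FRED. Assumes the
# pandas_datareader package and the series id "GDPNOW" (verify on the FRED site).
from datetime import datetime
import pandas_datareader.data as web

gdpnow = web.DataReader("GDPNOW", "fred", datetime(2014, 1, 1), datetime(2024, 5, 1))
print(gdpnow.describe())        # summary of the forecasted quarterly growth rates
print(gdpnow["2020":"2023"])    # the COVID shock and the roughly three-year recovery
```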

There is a lot to scratch your head about in the News Hour interview and the outputs of the GDPNow model before we even get to thinking about the problem of inflation. Why don't the Blue Chip forecasts show the COVID shock? Why does the GDPNow cellphone app not go back to 2019, before COVID, to report results? And which forecast should we believe, if any?

From Dr. Bostic's comments, it sounds as if the Fed ignores the forecasts and just waits for data to come in when making decisions about the economy. That's OK, but the Fed spends a lot of time and money on large-scale Dynamic Stochastic General Equilibrium (DSGE) models (here), the models criticized in the Congressional hearings, models that produce yet another set of forecasts. Worse yet, the DSGE models assume that economic agents use models to form expectations about economic variables and use those expectations to make decisions, decisions that the DSGE models then attempt to predict.

But which models, specifically, are economic agents using: the GDPNow model, the consensus of the twenty-or-so Blue Chip forecasting models, the forecasts of the DSGE models, or some other model entirely? (I have my own models that are similar to, but, I argue, an improvement over, the GDPNow approach.) I know the Fed is trying to be transparent and lay everything out on the table, but what I'm looking at appears contradictory, as it must have looked to the Congressional committees. And some commentators (here) and Congressmen (here) want to get rid of the Federal Reserve, Fed forecasts, and Fed policy manipulations entirely.

Interestingly enough, the current problems with Economic Policy all point back to our failure to understand the Great Depression* and the effects of economic shocks (such as the WWI-WWII shocks and the COVID shock). In future posts, I'll try to untangle this mess** because I think it is interesting and important, not because I think any economic agents (including the Fed and the ECB***) will be interested. Eventually, I will get around to looking at Inflation and Deflation!

Notes

* ChatGPT (here) lists the following causes for the Great Depression: (1) Stock Market Crash of 1929, (2) Bank Failures, (3) Reductions in Consumer Demand, (4) High Tariffs and Trade Barriers, (5) Monetary Policy Mistakes, (6) Debt Deflation, (7) Decline in International Economic Activity and (8) The Dust Bowl and Agricultural (Environmental) Collapse. 

** My working hypothesis is that we need to embed the US Economy within the World-System to not only understand the Great Depression but also to understand current economic policy confusions. The Fed doesn't really have a role for the World-System in its models.

*** The failure of Macro-economic models was also felt by the European Central Bank (ECB): "Macro models failed to predict the crisis and seemed incapable of explaining what was happening to the economy in a convincing manner".



Tuesday, April 30, 2024

Policy Models, Randomness and Free Will


In my research work I have developed a lot of policy models (see one about the US Health Care System here). If the models have any "policy recommendations," there is a very good chance that (1) no one is paying attention and (2) even if they were, they would not take advice from my models. If this is so futile, why do I keep doing it? If I have free will, maybe I should just drop my modeling fixations and play more golf.

All my policy models are based on the simple state-variable equation S(t) = F[S(t-1), X(t-1), E(t-1)], where S is the state of the system, F is some function, X are the input variables, E are the random(?) errors, and t is time. In this post, I'm going to discuss the random components, E. By construction, the E are independent of the state variables, S, but what are they really? Let me take an example from Sub-Saharan Africa (SSA) because in an upcoming post I'm going to present an SSA model.
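To fix ideas, here is a minimal sketch of the state equation with a linear transition function. The matrices A and B and all the dimensions are made up for illustration; they are not taken from any of my estimated models.

```python
# A minimal sketch of S(t) = F[S(t-1), X(t-1), E(t-1)] with a linear F:
# S(t) = A @ S(t-1) + B @ X(t-1) + E(t-1). All numbers are illustrative only.
import numpy as np

def simulate(A, B, S0, X, E):
    """Iterate the state equation forward given inputs X and errors E."""
    S = [S0]
    for t in range(1, len(X)):
        S.append(A @ S[-1] + B @ X[t - 1] + E[t - 1])
    return np.array(S)

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.0, 0.0],        # three state variables with simple
              [0.1, 0.8, 0.0],        # lower-triangular dynamics
              [0.0, 0.1, 0.7]])
B = 0.1 * rng.normal(size=(3, 2))     # two input variables
X = rng.normal(size=(50, 2))          # 50 periods of inputs
E = rng.normal(scale=0.05, size=(50, 3))
path = simulate(A, B, np.zeros(3), X, E)
```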



The state variables are constructed from the raw data in the World Development Indicators (WDI) using Principal Components Analysis (PCA). The variables are CO2E (CO2 emissions), EG.USE (Energy Use), GDP, TLF (Total Labor Force), and POP (Total Population). The numbers are weights, and the choice of variables is based on the Kaya Identity. The first component state variable (SSA1) is overall Growth, the second (SSA2) is (CO2-N), and the third (SSA3) is (GDP-N).* The three components explain 99% of the variation in the indicators, and these are typically all that are needed to construct the state space model S(t) = F[S(t-1), X(t-1), E(t-1)]. SSA2 and SSA3 are called Error-Correcting Controllers (ECCs) that keep the system on the growth path (maybe). The E are components 4 and 5 (SSA4 and SSA5), but what are they really?
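Here is a rough sketch of that construction: standardize the five indicators and run PCA, then read the weights off the component loadings. The file name and column labels are placeholders, not the actual WDI series codes.

```python
# A rough sketch of constructing the state variables: PCA on the five standardized
# Kaya-Identity indicators. File name and column labels are placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

indicators = ["CO2E", "EG.USE", "GDP", "TLF", "POP"]
wdi = pd.read_csv("ssa_wdi.csv", index_col="year")[indicators]   # hypothetical file

Z = StandardScaler().fit_transform(wdi)
pca = PCA(n_components=5).fit(Z)
components = pd.DataFrame(pca.transform(Z), index=wdi.index,
                          columns=["SSA1", "SSA2", "SSA3", "SSA4", "SSA5"])

print(pca.explained_variance_ratio_.cumsum())             # first three ~ 0.99 for SSA
print(pd.DataFrame(pca.components_, columns=indicators))  # the weights on each indicator
```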


I have estimated a state space model for SSA5, and an output graph is presented above (the solid line is actual data and the dashed red line is predicted). Testing shows that the series is not a Random Walk, but the model doesn't do a very good job of predicting it and misses all the turning points until after they are made. In other words, after we have exhausted 100% of the variation in the underlying data, we don't find "randomness" but rather forces that we can't predict very well and that, by themselves, explain little variance in the overall system. It's not that SSA5 = (POP-EG.USE) is uninteresting (it is a Population Energy Demand ECC); it just doesn't explain a lot of variance in Sub-Saharan Africa. It might in other regions of the world, just not here.
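For readers who want to replicate the kind of checks described above, here is a sketch: an Augmented Dickey-Fuller test for a random walk and a simple local-level-plus-AR(1) state space fit with statsmodels. This is not my actual estimation code, and `ssa5` is assumed to be the fifth principal component supplied as a pandas Series.

```python
# A sketch of the checks described above (not my actual estimation code).
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.statespace.structural import UnobservedComponents

def check_component(ssa5: pd.Series):
    """ADF test for a random walk plus a simple local-level + AR(1) state space fit."""
    adf_stat, p_value, *_ = adfuller(ssa5)
    print(f"ADF p-value: {p_value:.3f}")   # small p-value -> reject a unit root (random walk)
    result = UnobservedComponents(ssa5, level="local level", autoregressive=1).fit(disp=False)
    print(result.summary())
    return result.get_prediction()         # one-step-ahead predictions vs. the actual data

# Usage: pred = check_component(ssa5)      # ssa5 = fifth principal component (hypothetical)
```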


What explains most of the time path of the SSA data? You'll just have to wait until I present the model in an upcoming post. The points I want to make here are that (1) if I try to forecast the population-energy ECC out to 2060 (above), probably the best I can say is that there is going to be a correction starting from 2024, and (2) when the correction will happen, and to what extent, the model cannot predict. When future data come in, the model can detect whether a correction was made. On the other hand, maybe a regional war will interrupt the adjustment process. Time will tell (not the model).

Estimating the model does not seem futile. At least it uncovers a low-variance feedback process covering population energy use. And, if you know anything about economic models, you will know that they mostly do not contain feedback effects.

Another point, prompted by Sabine Hossenfelder's video on Free Will (above), is that Sub-Saharan Africa cannot be said to have free will. I will demonstrate in a future post that it is a macro-system with many human institutions, but these institutions don't have "will." The causes of policy actions might be too complex to predict ahead of time, but models can tell us what happened after the fact and clarify the underlying causal forces. That is something, and I want to keep doing it.

NOTES


* (CO2-N) is an Environmental ECC. (GDP-N) is a Malthusian Crisis ECC. I'll explain ECCs more fully in future posts. For the time being, these results indicate that environmental damage and Malthusian Crisis are being monitored (at least in SSA) in attempts to maintain the growth path. ECCs will be different in different regions and different countries in the World-System.


Friday, September 25, 2020

The Problem with Voting: Weak Feedback

 


The political system in the US is under pressure not only from the COVID-19 Pandemic but also from the Right Wing attempt to gerrymander voting districts, hack election results, and disenfranchise minority voters (see Jennifer Cohn's Twitter postings, here). The question is what to do about it.

David Easton's model of the political system (graphic above) provides a way to think about voting. In Easton's model, the Political System is analyzed as a "system" with inputs, outputs, and feedback. The system is embedded in an Environment that involves the State and the World-System. In a Democracy, voting is a method of registering demands (think of ballot initiatives) and support (the percentage of the voting population participating in an election).

If we trace voting through the Political System, once you have voted (either in person or absentee), the output is an electoral decision and the feedback is who won the election or which ballot initiatives were supported. The concern over the US Election System is that, after you have voted, your vote may either not have been counted or have been altered (hacked) once it is in electronic systems. In other words, the feedback about what happened to your vote is either very weak or nonexistent.

Some States (for example, MyVote Wisconsin) track your absentee ballot and tell you whether it has been received. This is a step forward, but it still doesn't confirm that your vote was actually counted, that the votes you cast were properly recorded, and that election results were properly tallied. It would be very straightforward to construct an electronic voting system that would do this, but the problems would be trust and security. You should be able to log in, or go to your County Clerk's office, and learn definitively how your vote was recorded. You should be able to validate its accuracy.
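As a toy illustration of the verification piece only (not a design for a real election system, which would still have to solve the trust and security problems just mentioned), a voter could hold a salted hash receipt of their ballot and check it against a published list of recorded-ballot hashes:

```python
# A toy sketch of ballot verification: the voter keeps a secret salt and a receipt
# hash; the county publishes the hashes of all recorded ballots; the voter checks
# that their receipt appears without revealing how anyone voted. Illustrative only.
import hashlib
import secrets

def ballot_receipt(ballot: str) -> tuple[str, str]:
    salt = secrets.token_hex(16)                      # kept private by the voter
    digest = hashlib.sha256((salt + ballot).encode()).hexdigest()
    return salt, digest                               # digest is what gets published

def verify(ballot: str, salt: str, published: set[str]) -> bool:
    return hashlib.sha256((salt + ballot).encode()).hexdigest() in published

salt, receipt = ballot_receipt("initiative-7:yes;council:smith")
print(verify("initiative-7:yes;council:smith", salt, {receipt}))   # True
```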

Even if you could do this, what to do next is another problem. Blockchain technology has electronic dispute resolution and tracking mechanisms that might prove useful to individuals and to voting watchdog groups. At this point, there is no perfect solution, and the voting system's feedback mechanism remains fundamentally weak.

Voting is not the only form of feedback in the Political System. Protest is another (see the analysis of Protest feedback here) and is being used continuously in the US.

Wednesday, August 8, 2018

The Challenge of Digital Evidence


The Lawfare Podcasts, sponsored by the Brookings Institution, always seem to get below the surface and find more detail than is typically available in the electronic news media. The one above describes the problems digital service and media providers have responding to law enforcement requests (subpoenas) for digital evidence. I won't try to summarize the entire podcast but one topic caught my attention: the breadth of subpoenas.

Typically, a subpoena will list some specific evidence needed in a criminal case (e.g., a list of cell phone calls a suspect has made) but then goes on to add "...in addition to all other relevant evidence." Of course, this is the problem: who decides what is relevant, and how much data can a company be compelled to provide?

Compare the digital search to a typical warrant. Law enforcement breaks into an office with a warrant. The warrant lists specific evidence e.g., documents and physical evidence. If law enforcement finds other relevant evidence of criminal activity, that evidence is also swept up in the search. If they are looking for emails, can they ask for a backup of the company's entire email system in case some other employee (not the target of an investigation) forwarded evidence while also adding incriminating comments? Challenges to the scope of a digital subpoena can tie up searches to the point that data begins to disappear.

The podcast discusses a proposal to have a central clearing house that can broker these kinds of problems. I would add one other suggestion. If you listen carefully to the podcast, the discussion proceeds at a very high level with few specifics about the typical kinds of searches law enforcement might execute. Basically, prosecutors and law enforcement do not want to miss something that might be relevant. This is the mentality of the system.

After 9/11, the Department of Homeland Security (DHS) explored ways to share more information between local law enforcement, the FBI, and DHS. "Connecting the dots" became a priority. But it was impossible to get the Federal government to specify what it wanted. Local law enforcement did not feel comfortable sharing its raw data, which included the names of citizens never charged with any crime (e.g., witnesses). After much argument and miscommunication, DHS admitted it "wanted everything" and really just wanted to do Google searches on every bit of information it could store centrally. This is the mentality of the system.

My suggestion is for the central clearing house to work on specifying routine searches (e.g., cell phone call records) and move on from there to digital financial records, personnel records, customer data, etc. Digital service and media providers can then work on the programming necessary to meet these requests in a routine way. This might be easier than it sounds because enterprise software providers (e.g., Oracle, Microsoft, IBM) can build these searches into the commercial systems used by most large companies.

At some point in the process of building search requirements we will get to unreasonable and ill-defined requests basically for everything. I cannot predict what will happen at this point but judges might have to decide that some requests cannot be met. The central clearing house could be an unbiased third party assuring the judicial system that a request has gone too far.

Tuesday, May 15, 2018

The Local Government Growth Treadmill


I can't imagine someone standing up at a Town or County Board meeting in the US and saying, "I think we should stop growing, stop developing!" But I have actually seen it. Typically, the questioner gets a polite but firm tongue-lashing from a Council member who is good at delivering one. Representatives who are in the pocket of some developer seldom stand up and admit it. So it's easy to blame rampant corruption or political personalities or Right Wing ideology. Maybe? But I want to argue that it's the System!

The graphic above (click to enlarge) is a simplified model of the more complex path diagram floating around in my head. It really isn't substantially different from Jay Wright Forrester's (1969) Urban Dynamics model (which actually has some academic and real-world support here, here, and here). Follow the arrows:
  • Local population growth leads to an increase in consumer spending, construction spending and the tax base.
  • Local economic growth leads to more employment which, in turn, leads to an influx of job seekers and increased population growth.
This is a very strong positive feedback loop. Break out "Local Economic Growth" and you find banks, realtors, developers, builders, retail businesses, and local government--a coalition of powerful interests.

All systems have negative feedback loops that try to control overdevelopment, but the question is "how strong are the negative loops":
  • Economic growth and local population growth have environmental impacts: (1) Depletion and contamination of local water supplies, (2) Loss of areas to handle water runoff, (3) Increased pollution, (4) Loss of farmland, etc. etc.
  • Environmental degradation decreases the Quality of Life in a community, making it less attractive for in-migration.
Except for a small number of very concerned communities, this negative feedback loop is slow acting, with long time delays. Typically, by the time a community finds out it has degraded the environment, it's too late (Dallas, TX is a good example here).
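Here is a toy simulation of the two loops (my own sketch, not Forrester's model): the positive loop links population, spending, and economic growth, while the negative loop erodes environmental quality with a ten-year delay. All coefficients are made up for illustration.

```python
# A toy simulation of the two feedback loops in the path diagram. All coefficients
# and the ten-year environmental delay are illustrative assumptions.
import numpy as np

T = 60                                   # years
pop = np.zeros(T); econ = np.zeros(T); env = np.zeros(T)
pop[0], econ[0], env[0] = 1.0, 1.0, 1.0  # index values

for t in range(1, T):
    # positive loop: more people -> more spending and tax base -> faster growth
    econ[t] = econ[t-1] + 0.05 * pop[t-1]
    # environmental degradation accumulates with (delayed) economic activity
    env[t] = max(env[t-1] - 0.002 * econ[max(t - 10, 0)], 0.0)
    # negative loop: lower quality of life slows in-migration and population growth
    pop[t] = pop[t-1] + 0.03 * econ[t-1] * env[t-1]

print(f"population index: {pop[-1]:.2f}, environment index: {env[-1]:.2f}")
```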

The interesting thing about feedback loops is that they can be reversed by external shocks.
  • A large, negative shock to Local Economic Growth from the National Economy can reverse the entire process as happened during the Subprime Mortgage Crisis in the US.
  • Tinkering with the housing market at the national level negatively impacted the banking system, the construction industry, and the entire economy. As incomes fell, the tax base fell and local governments found it difficult to provide even their basic functions. Developers abandoned entire tracts, creating a new kind of ghost town in States such as Arizona and Nevada.
Cities and Counties could not wait to get back on the positive feedback cycle, and some communities were surprised by how rapidly large tracts of land were zoned and platted for new subdivisions, well ahead of any improvement in the local economy (here).

EXERCISES
  1. How was your local community affected by the Subprime Mortgage Crisis? Check your on-line newspaper archive. How were banks, retail businesses, construction, and government finance affected? Are you still feeling any of the effects today?
  2. Go to a meeting of the local Planning Commission and get a flavor for how new developments are approved. If there is a Town or City Master Plan, get a copy and read it. What surprised you about how the Planning Commission operated?
  3. Modify and expand the path diagram above to fit your local community. How good is the theory?




Tuesday, November 7, 2017