Thursday, October 31, 2024
It's the Economy, Stupid!
Friday, June 7, 2024
The Lessons of the 1930s
Yesterday, June 6, marked the 80th anniversary of the start of Operation Overlord (D-Day, June 6, 1944). Senator Mitch McConnell, in an NYT editorial (here), reminds us not to forget the Lessons of the 1930s, the main one, he argues, being the dangers of the Isolationist Movement. We should all understand that the editorial is an oblique attack on the MAGA (Make America Great Again) movement and the current Republican presidential candidate. But what were the Lessons of the 1930s? Let's ask an AI system (here).
In response to a query, ChatGPT lists eleven lessons from the period:
- Dangers of Unregulated Markets
- The Role of Government in Economic Stabilization
- The Importance of Monetary Policy
- The Importance of Social Safety Nets
- The Rise of Authoritarianism
- The Power of Collective Action
- The Consequences of Isolation
- The Importance of Economic Cooperation
- Technological Innovation in Crisis
- The Power of Media and Communication
- The Impact of Environmental Mismanagement (the Dust Bowl)
Saturday, May 18, 2024
Models, Data and US Federal Reserve Policy Decisions
On January 11, 2024, the PBS News Hour interviewed Raphael Bostic, president of the Atlanta Federal Reserve Bank. One interesting part of the interview was the description of how models vs. data are used in the Federal Reserve's decision-making process, at least by Dr. Bostic. The role of economic models in decision making is interesting because on July 20, 2010, the US House of Representatives Committee on Science and Technology held a hearing on the topic (transcript here) and concluded that the models were not very useful. My conclusion is that the US Fed is on the right path with some of the new models being developed. Future posts will give my recommendations (as a statistician) and my reasons for making them; it is essentially my forecast for the future of US Fed model building.
In the interview above, Amna Nawaz asked when Bostic expected the economy to reach the US Fed's 2% Inflation rate target. Bostic answered:
Well, we have models, and models will give us an answer...[but]...I don't put too much stock in any of those longer-term issues...I just try to keep an eye on where things are going month to month and try to just have a clear understanding about where we stand.
In other words, the models give us some long term predictions about Inflation and Economic Growth but, for month-to-month decision making we use data.
So what exactly can we get from models? I would hope that the Atlanta Fed gets Prediction Intervals telling them that, say, Inflation might be between 1% and 3%, bracketing the 2% target. Then, as the data come in, they can be evaluated to answer Amna Nawaz's question. And I would hope that the state of the economy would have some role in predicting the time path of inflation, something like the prediction intervals produced by Climate Models:
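As a toy illustration (this is not the Fed's methodology, and the inflation readings below are made up), here is a minimal sketch of how a prediction interval might bracket the 2% target, using a naive random-walk forecast where uncertainty grows with the square root of the horizon:

```python
import statistics

# Hypothetical monthly inflation readings (percent, annualized) -- illustrative only
readings = [3.1, 2.9, 2.7, 2.6, 2.4, 2.3, 2.2, 2.1]

def prediction_interval(history, horizon=6, z=1.96):
    """Naive random-walk prediction interval: the point forecast is the
    last observation, and the interval widens with the square root of the
    forecast horizon (a textbook assumption, not a Fed model)."""
    diffs = [b - a for a, b in zip(history, history[1:])]
    sigma = statistics.stdev(diffs)          # std. dev. of month-to-month changes
    point = history[-1]
    half_width = z * sigma * horizon ** 0.5  # 95% band, growing with horizon
    return point - half_width, point + half_width

lo, hi = prediction_interval(readings)
print(f"6-month-ahead 95% interval: {lo:.2f}% to {hi:.2f}%")
print("2% target inside interval:", lo <= 2.0 <= hi)
```

With these made-up numbers the interval brackets 2%, so incoming monthly data could then be checked against the band, which is exactly the "models for the long term, data for the month-to-month" division of labor Bostic describes.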
The original topic of the News Hour interview was to gauge the actual strength of the economy and consumers' expectations about economic growth (although most of the interview concentrated on Inflation).
For the strength of the economy, let's look at the Atlanta Fed's GDPNow forecast for economic growth (quarterly percentage change in real GDP). The output from the GDPNow app (presented above) compares the GDPNow forecast to the range of the top and bottom ten Blue Chip forecasts. GDPNow predicts GDP percentage changes well outside the Blue Chip forecasts until we get into March of 2023. What's going on here, and why?
Notes
Tuesday, April 30, 2024
Policy Models, Randomness and Free Will
In my research work I have developed a lot of policy models (see one about the US Health Care System here). If the models have any "policy recommendations," there is a very good chance that (1) no one is paying attention and (2) even if they were, they would not take advice from my models. If this is so futile, why do I keep doing it? If I have free will, maybe I should just drop my modeling fixations and play more golf.
All my policy models are based on the simple state-variable equation S(t) = F[S(t-1), X(t-1), E(t-1)], where S is the state of the system, F is some function, X are the input variables, E are the random(?) errors and t is time. In this post, I'm going to discuss the random components, E. By construction, the E are independent of the state variables, S, but what are they really? Let me take an example from Sub-Saharan Africa (SSA) because in an upcoming post I'm going to present an SSA model.
The state variables are constructed from the raw data in the World Development Indicators (WDI) using Principal Components Analysis (PCA). The variables are CO2E (CO2 emissions), EG.USE (Energy Use), GDP, Total Labor Force (TLF) and POP (Total Population). The numbers are the component weights, and the choice of variables is based on the Kaya Identity. The first component state variable is overall Growth, the second is (CO2-N), and the third is (GDP-N).* The three components explain 99% of the variation in the indicators and are typically all that is needed to construct the state space model S(t) = F[S(t-1), X(t-1), E(t-1)]. The second and third components, SSA2 and SSA3, are called the Error-Correcting Controllers (ECCs) that keep the system on the growth path (maybe). The E are components 4 and 5, but what are they really?
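The PCA step described above can be sketched in a few lines. The indicator data below are simulated (a shared growth trend plus noise), standing in for the actual WDI series, so the weights are illustrative only; the point is that when the five indicators move together, a handful of components captures nearly all the variation:

```python
import numpy as np

# Simulated stand-ins for the five WDI-style indicators
# (CO2E, EG.USE, GDP, TLF, POP): a common growth trend plus indicator noise.
rng = np.random.default_rng(0)
growth = np.cumsum(rng.normal(0.5, 0.1, size=30))          # shared growth trend
X = np.column_stack([growth + rng.normal(0, s, 30)          # indicator = trend + noise
                     for s in (0.05, 0.05, 0.10, 0.05, 0.05)])

# Standardize, then extract principal components from the covariance matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]                           # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Z @ eigvecs                  # component scores = candidate state variables
explained = eigvals / eigvals.sum()   # share of variance per component
print("variance explained by first 3 components:", explained[:3].sum())
```

In this toy setup the first component is the shared growth trend; the leftover components (the analogues of components 4 and 5 above) are the small residual wiggles, which is exactly where the question "what are the E really?" lives.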
NOTES
Friday, September 25, 2020
The Problem with Voting: Weak Feedback
The political system in the US is under pressure not only from the COVID-19 Pandemic but also from the Right Wing attempt to gerrymander voting districts, hack election results and disenfranchise minority voters (see Jennifer Cohn's Twitter postings, here). The question is, what to do about it.
David Easton's model of the political system (graphic above) provides a way to think about voting. In Easton's model, Political System is analyzed as a "system" with inputs, outputs, and feedback. The system is embedded in an Environment that involves the State and the World-System. In a Democracy, voting is a method of registering demands (think of ballot initiatives) and support (percent of the voting population participating in an election).
If we trace voting through the Political System, once you have voted (either in person or absentee) the output is an electoral decision and the feedback is who won the election or what ballot initiatives were supported. The concern over the US Election System is that after you have voted, your vote may either not have been counted or may have been altered (hacked) once inside electronic systems. In other words, the feedback about what happened to your vote is either very weak or nonexistent.
Some States (for example, MyVote Wisconsin) track your absentee ballot and tell you whether it has been received. This is a step forward, but it still doesn't ensure that your vote was actually counted, that the choices you cast were recorded accurately, or that the election results were tallied correctly. It would be very straightforward to construct an electronic voting system that would do this, but the problem would be trust and security. You should be able to log in, or go to your County Clerk's office, and learn definitively how your vote was recorded. You should be able to validate its accuracy.
Even if you could do this, what to do next is another problem. Blockchain has an electronic dispute resolution and tracking mechanism that might prove useful to individuals and to voting watchdog groups. At this point, there is no perfect solution and the voting system feedback mechanism remains fundamentally weak.
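One building block such a system might use is a cryptographic receipt: the voter keeps a secret salt, the election office publishes only hashes of counted ballots, and the voter can later confirm their ballot is on the published list without the list revealing how anyone voted. This is a minimal sketch of the idea only (the ballot format and workflow are invented for illustration, and it does not by itself solve the trust, coercion, or vote-selling problems that make real end-to-end verifiable voting hard):

```python
import hashlib
import secrets

def make_receipt(ballot_choices: str) -> tuple[str, str]:
    """Return a (salt, receipt) pair. The voter keeps both; the election
    office publishes only the receipt hashes of counted ballots."""
    salt = secrets.token_hex(16)
    receipt = hashlib.sha256((salt + ballot_choices).encode()).hexdigest()
    return salt, receipt

def verify_counted(salt: str, ballot_choices: str, published: set[str]) -> bool:
    """The voter recomputes the hash and checks it against the published list."""
    return hashlib.sha256((salt + ballot_choices).encode()).hexdigest() in published

# Hypothetical usage with an invented ballot encoding
salt, receipt = make_receipt("prop-12:yes;mayor:candidate-a")
published_receipts = {receipt}                 # what the clerk's office posts
print(verify_counted(salt, "prop-12:yes;mayor:candidate-a", published_receipts))   # True
print(verify_counted(salt, "prop-12:no;mayor:candidate-a", published_receipts))    # False
```

The second check fails because any change to the recorded choices changes the hash, which is the kind of strong, individual feedback the post argues the current system lacks.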
Voting is not the only form of feedback in the Political System. Protest is another (see the analysis of Protest feedback here) and one being used continuously in the US.
Wednesday, August 8, 2018
The Challenge of Digital Evidence
The Lawfare Podcasts, sponsored by the Brookings Institution, always seem to get below the surface and find more detail than is typically available in the electronic news media. The one above describes the problems digital service and media providers have responding to law enforcement requests (subpoenas) for digital evidence. I won't try to summarize the entire podcast but one topic caught my attention: the breadth of subpoenas.
Typically, a subpoena will list some specific evidence needed in a criminal case (e.g., a list of cell phone calls a suspect has made) but then goes on to add "...in addition to all other relevant evidence." Of course, this is the problem: who decides what is relevant, and how much data can a company be compelled to provide?
Compare the digital search to a typical warrant. Law enforcement breaks into an office with a warrant. The warrant lists specific evidence e.g., documents and physical evidence. If law enforcement finds other relevant evidence of criminal activity, that evidence is also swept up in the search. If they are looking for emails, can they ask for a backup of the company's entire email system in case some other employee (not the target of an investigation) forwarded evidence while also adding incriminating comments? Challenges to the scope of a digital subpoena can tie up searches to the point that data begins to disappear.
The podcast discusses a proposal to have a central clearing house that can broker these kinds of problems. I would add one other suggestion. If you listen carefully to the podcast, the discussion proceeds at a very high level with few specifics about the typical kinds of searches law enforcement might execute. Basically, prosecutors and law enforcement do not want to miss something that might be relevant. This is the mentality of the system.
After 9/11, the Department of Homeland Security (DHS) explored ways to share more information between local law enforcement, the FBI and DHS. "Connecting the dots" became a priority. But it was impossible to get the Federal government to specify what it wanted. Local law enforcement did not feel comfortable sharing its raw data, which included names of citizens never charged with any crime, e.g., witnesses. After much argument and miscommunication, DHS admitted it "wanted everything" and really just wanted to do Google searches on every bit of information it could store centrally. This is the mentality of the system.
My suggestion is for the central clearing house to work on specifying routine searches, e.g., cell phone call records, and move on from there to digital financial records, personnel records, customer data, etc. Digital service and media providers can then work on the programming necessary to meet these requests in a routine way. This might be easier than it sounds because enterprise software providers, e.g., Oracle, Microsoft, IBM, etc., can build these searches into commercial systems that are used by almost all large companies.
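To make the idea concrete, a standardized request might look something like the sketch below. The categories, field names, and the "routine" test are all invented for illustration, not drawn from any real legal standard; the point is that a routine request names a pre-specified category and concrete subjects, so open-ended "everything" requests fail the test automatically:

```python
from dataclasses import dataclass

# Hypothetical pre-specified categories a clearing house might standardize.
ROUTINE_CATEGORIES = {"call_records", "financial_records",
                      "personnel_records", "customer_data"}

@dataclass
class EvidenceRequest:
    case_id: str
    category: str             # must be one of the pre-specified routine categories
    subject_ids: list[str]    # the specific accounts/numbers named in the subpoena
    date_range: tuple[str, str]  # ISO dates bounding the search

    def is_routine(self) -> bool:
        """Routine = a pre-specified category plus named subjects; no
        open-ended 'all other relevant evidence' clause can qualify."""
        return self.category in ROUTINE_CATEGORIES and len(self.subject_ids) > 0

# A specific request passes; an "everything" request does not.
req = EvidenceRequest("24-CR-1001", "call_records",
                      ["+1-555-0100"], ("2024-01-01", "2024-03-31"))
print(req.is_routine())    # True
broad = EvidenceRequest("24-CR-1001", "everything", [], ("2024-01-01", "2024-03-31"))
print(broad.is_routine())  # False
```

A schema like this is also what would let enterprise software vendors build the corresponding searches into their products, since the request shape is fixed in advance rather than negotiated case by case.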
At some point in the process of building search requirements we will get to unreasonable and ill-defined requests basically for everything. I cannot predict what will happen at this point but judges might have to decide that some requests cannot be met. The central clearing house could be an unbiased third party assuring the judicial system that a request has gone too far.