One and a half years ago, I signed up for the Good Judgment Project, which is run by a team from the University of Pennsylvania and UC Berkeley. The project is one of five competing in a forecasting tournament sponsored by IARPA. Its main objective is to “dramatically enhance the accuracy, precision, and timeliness of forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts”.
This post is part one of a series on (amateur) forecasting. First up: how does the tournament work?
Every other week, the admins post new questions about world events to a password-protected website, and the system asks us to allocate probabilities to different outcomes. Most of the questions cover global financial markets (exchange rates, sovereign debt ratings), security-related events (“when will the Syrian rebels capture Aleppo?”) or electoral outcomes (“who will be the next president of Cyprus?”). The answers express probabilities as percentage points and must add up to 100% for every question. For questions about time frames, this often means there is a residual category (“event will not occur before X”); in the easier cases, answering is a matter of yes or no.
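To make the answer format concrete, here is a purely hypothetical sketch in Python (the option labels and numbers are invented; the actual website simply uses form fields):

```python
# Hypothetical illustration of the answer format for a single question with
# a residual category. Labels and numbers are made up for this example.
forecast = {
    "before 1 April": 0.15,
    "before 1 July": 0.35,
    "event will not occur before 1 July": 0.50,
}

# The shares must add up to 100% (expressed here as probabilities summing to 1).
assert abs(sum(forecast.values()) - 1.0) < 1e-9
```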
The whole thing is based on teamwork. Forecasters are put in groups and get updates on their colleagues’ forecasts. There are team discussion boards and, most importantly, we are supposed to briefly comment on every prediction we make. Being able to see your teammates’ reasoning means there is often a herd instinct to follow forecasters who have argued their case well. Also, the scoring is transparent (using Brier scores), so you can choose to discount forecasts made by the guy at the bottom of the team ranking.
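For those unfamiliar with it, the Brier score measures the squared distance between your stated probabilities and what actually happened, so lower is better. Here is a minimal sketch, assuming the common multi-category variant that ranges from 0 (perfect) to 2 (worst); the tournament’s exact scoring rules may differ in detail:

```python
def brier_score(forecast, outcome_index):
    """Multi-category Brier score: sum of squared differences between the
    forecast probabilities and the realized outcome (0 = perfect, 2 = worst)."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast)
    )

# Example: a question with three possible outcomes, where outcome 0 occurs.
print(brier_score([0.6, 0.3, 0.1], 0))  # 0.26
```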
So far the experience has been fascinating. First, the tournament exposes you to a variety of world events and interesting questions, which is good in itself. Yet it also tells you a lot about research, collaboration and self-reflection. It was clear from the beginning that simply guessing and not communicating with your teammates does not get anyone far. By now I have a mental list of people whose forecasts I simply ignore, because they have proven to be quite inconsistent. Conversely, there is a strong motivation among the more active, communicative participants in the group to help each other out.
Most importantly, the organizers have done a lot to alert all participants to the basic rules of Bayesian probability and the importance of updating forecasts when conditions change. While it is impossible to perfectly predict whether or not country A will invade country B before date X, you can still learn a lot about baseline assumptions for that kind of question. For me personally, the tournament has been a lesson in how personal sympathies bias forecasts: my initial optimism regarding the Syrian rebels’ military advances has clearly hurt my overall performance…
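To illustrate what “updating” means in practice, here is a toy Bayesian calculation. The numbers are entirely made up and not taken from any actual question:

```python
def bayes_update(prior, p_evidence_given_event, p_evidence_given_no_event):
    """Posterior probability of an event after observing one piece of evidence,
    via Bayes' rule: P(E|D) = P(D|E)P(E) / (P(D|E)P(E) + P(D|~E)P(~E))."""
    numerator = p_evidence_given_event * prior
    denominator = numerator + p_evidence_given_no_event * (1 - prior)
    return numerator / denominator

# Illustrative numbers only: a 5% base rate for an invasion-type question,
# updated after reports of troop movements that are assumed to be four times
# more likely if an invasion is coming than if it is not.
posterior = bayes_update(prior=0.05,
                         p_evidence_given_event=0.8,
                         p_evidence_given_no_event=0.2)
print(round(posterior, 3))  # 0.174
```

The point is not the specific numbers but the habit: start from a base rate, then move it up or down in proportion to how diagnostic the new evidence really is.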
If you would like to try your hand at forecasting in season 3 (starting in June 2013), sign up for the Good Judgment Project waiting list. There may also be open spots on the (less successful) competing teams.
Jay Ulfelder also posted about forecasting yesterday, although with a different focus. Interesting nevertheless: http://dartthrowingchimp.wordpress.com/2013/03/11/forecasting-politics-is-still-hard-to-do-well/