
Links: Drones; Forecasting; Ranking Researchers; Surveillance Logic

A combat drone, because that’s the most photogenic of all topics covered here today… (Wikimedia commons)

I hope you’re having a great week so far! My fellow bloggers have other obligations, so you’ll have to tolerate my incoherent link lists for the time being…

At the Duck of Minerva, Charli Carpenter makes a crucial point regarding the debate on military drones (emphasis added):

In my view, all these arguments have some merit but the most important thing to focus on is the issue of extrajudicial killing, rather than the means used to do it, for two reasons. First, if the US ended its targeted killings policy this would effectively stop the use of weaponized drones in the war on terror, whereas the opposite is not the case; and it would effectively remove the CIA from involvement with drones. It would thus limit weaponized drones to use in regular armed conflicts that might arise in the future, and only at the hands of trained military personnel. If Holewinski and Lewis are right, this will drastically reduce civilian casualties from drones.

I’d like to recommend a couple of links on attempts to forecast political events. First, the always excellent Jay Ulfelder has put together some links on prediction markets, including a long story in the Pacific Standard on the now defunct platform Intrade. Ulfelder also comments on “why it is important to quantify our beliefs”.

Second (also via Ulfelder), I highly recommend the Predictive Heuristics blog, which is run by the Ward Lab at Duke University. Their most recent post covers a dataset on political conflict called ICEWS and its use in the Good Judgment Project, a forecasting tournament that I have covered here on the blog as well. (#4 of my series should follow soon-ish.)
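As an aside on “quantifying our beliefs”: tournaments like the Good Judgment Project score probabilistic forecasts against what actually happened, typically with the Brier score, so that vague hunches become numbers you can be held to. Here is a minimal sketch in Python (my own illustration, not code from any of the linked projects):

def brier_score(forecasts, outcomes):
    # Mean squared difference between probability forecasts and
    # binary outcomes (1 = event happened, 0 = it did not).
    # Lower is better; always saying 50% scores a flat 0.25.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts of 80%, 60% and 10%; the first two events occurred:
print(brier_score([0.8, 0.6, 0.1], [1, 1, 0]))  # ~0.07

The point of such a score is exactly Ulfelder’s: once your belief is a number, you can check it, rank it, and improve it.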

A post by Daniel Sgroi at VoxEU suggests a way for panelists in the UK Research Excellence Framework (REF) to judge the quality of research output. Apparently, there is a huge effort underway to rank scholars based on their output (i.e., publications) — and the judges have been explicitly told not to consider the journals in which articles were published. Sgroi doesn’t think that’s a good idea:

Of course, economists are experts at decision-making under uncertainty, so we are uniquely well-placed to handle this. However, there is a roadblock that has been thrown up that makes that task a bit harder – the REF guidelines insist that the panel cannot make use of journal impact factors or any hierarchy of journals as part of the assessment process. It seems perplexing that any information should be ignored in this process, especially when it seems so pertinent. Here I will argue that journal quality is important and should be used, but only in combination with other relevant data. Since we teach our own students a particular method (courtesy of the Reverend Thomas Bayes) for making such decisions, why not practise what we preach?

This resonates with earlier debates here and elsewhere on how to assess academic work. There’s a slippery slope if you rely on publications: in the end, are you just going to count the number of peer-reviewed articles in a CV without ever reading any of them? However, Sgroi is probably right to point out that it’s absurd to disregard entirely the most important mechanism of quality control this profession has to offer, despite all its flaws.
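Sgroi’s “method courtesy of the Reverend Thomas Bayes” is simply Bayesian updating: treat the journal’s track record as a prior and let article-level evidence move it, so that neither source of information dominates. A toy sketch (all numbers are invented for illustration; this is not Sgroi’s actual model):

def posterior_quality(prior, p_signal_if_good, p_signal_if_bad):
    # Bayes' rule: update the belief that a paper is excellent after
    # observing a positive signal such as strong citations.
    evidence = p_signal_if_good * prior + p_signal_if_bad * (1 - prior)
    return p_signal_if_good * prior / evidence

# Journal tier sets the prior; citations update it (numbers invented):
print(posterior_quality(prior=0.3, p_signal_if_good=0.7, p_signal_if_bad=0.1))  # 0.75

On these made-up numbers, a paper from a mid-tier journal (prior 0.3) that turns out to be highly cited ends up at 0.75: the journal matters, but it is not the last word.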

Next week, the Körber-Stiftung will hold the 3rd Berlin Foreign Policy Forum. One of the panels deals with transatlantic relations; I wonder whether any interesting news on the spying scandal will pop up in time. Meanwhile, this talk by Dan Geer on “tradeoffs in cyber security” illustrates the self-reinforcing logic of surveillance (via Bruce Schneier):

Unless you fully instrument your data handling, it is not possible for you to say what did not happen. With total surveillance, and total surveillance alone, it is possible to treat the absence of evidence as the evidence of absence. Only when you know everything that *did* happen with your data can you say what did *not* happen with your data.
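Geer’s point becomes concrete if you imagine a system in which every access path is instrumented: only then does an empty audit log prove anything. A toy sketch of such “total surveillance” of one’s own data (my illustration, not from the talk):

import datetime

class AuditedStore:
    # Toy key-value store whose only access paths write audit records.
    def __init__(self):
        self._data = {}
        self.log = []

    def put(self, key, value):
        self.log.append((datetime.datetime.utcnow(), "put", key))
        self._data[key] = value

    def get(self, key):
        self.log.append((datetime.datetime.utcnow(), "get", key))
        return self._data.get(key)

    def was_read(self, key):
        # Because every read is logged, an empty log really does prove
        # the key was never read: absence of evidence becomes evidence
        # of absence.
        return any(op == "get" and k == key for _, op, k in self.log)

store = AuditedStore()
store.put("secret", 42)
print(store.was_read("secret"))  # False -- and we can actually assert it

The unsettling part, of course, is that the same logic that lets you vouch for your own data is the logic that justifies watching everything.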

IR Journals Off the Beaten Track


Whenever you write an academic paper – no matter whether it is for school, for a journal, or as part of your thesis – you need literature. You need to find other papers or books to read and cite to show that you know what you and others are talking about. But where do you look for this literature? No matter whether you start your search at Google Scholar, your local university library, or the Web of Knowledge (WoK), you often end up following a beaten track. And that track most often leads through US publishing houses, authors, and journals.

If you are interested in some alternative views, here are some links to journals that might help you leave that path at least once in a while:

Some of these journals are actually listed in the Social Science Citation Index, and you might want (or have) to access them through the Web of Knowledge (given that your institution has access to the WoK).

This list is probably not exhaustive, and it ignores non-US journals from Europe and Canada. But it introduces publications of IR communities that are probably farthest off the beaten track, and it represents what I have collected over the years as part of my own research on post-Western IR. If you know of other journals or good alternative databases, please share them with us!