In last week’s On The Media, we heard from Nick Diakopoulos of the Columbia Journalism School on the state of bots that write the news before anyone else can. But the subject was earthquakes, the devastating seismic activity we try to detect with technology. Humans are the ones looking through the data that computers have compiled, yet we would still be lost without the computers. An earthquake’s magnitude is already calculated by a computer, and I see no reason why (if possible) that computer couldn’t release the information itself, in a dry if dense way.
In class we talked about the problems of Twitter and the rapid release of information that goes un-fact-checked. Just as humans type news into their Twitter accounts, humans are the ones programming this auto-reporting software. There is certainly room for error, but it has to be taken in context: an earthquake’s magnitude (or anything scientifically measured) is less debatable than news about a local election. What the OTM piece does bring up is that bots can not only be incorrect, but can also automatically correct misinformation. So while there are now two possible sources of error (the human and the bot), the bot could serve as a cheaper version of a human fact-checker and save a news company money in the end.
A bot writing with style and floridity doesn’t seem likely in the near future, but if it is possible, I say pursue it. As with everything, though, there need to be humans behind the scenes controlling it. If the bots’ writing is dry, then humans can add to it. But if information needs to go out immediately, a bot seems like a superior alternative to human reporting. Once again, it depends on the matter at hand. I wouldn’t like to open the obituaries and see that the writing has been outsourced to bots: “John Stamos is dead. His ex-wife is not. He died in his sleep.”
One thing I did not like was Diakopoulos’s claim that “by telling people where the data came from, it gives them an extra sort of signal about the credibility or the trustworthiness of this thing.” Let’s not treat computers as the almighty beings of society. I don’t think we should believe a bot over a human, since, once again, a human made that bot. We should acknowledge its swiftness in reporting and its convenience; it is cheaper and less involved, it is economical. But technology is so often associated with precision that, at this stage in auto-reporting, developing a preference for bots will invite a certain manipulation of the news by those who create them.