Now, here’s an activity with two humans in the loop. Given the physics involved, the goalkeeper should be beaten every time. That is, assuming a high level of performance from the footballer taking the penalty. I guess that’s why we are often critical when they miss. In the last few weeks there have been more than a few examples to watch.
What we know is that football penalties are much more than mechanical actions and reactions. However, there’s a degree of mythology about the inevitability of human factors controlling the outcome: goal or no goal. I’d like to think there’s an ever-shifting blend of what physics does to the ball and what the human does. Is it always possible to predict how a spinning ball traveling at speed will behave when it is touched by a goalkeeper’s fingertip?
What if the footballer taking the penalty were an “intelligent” machine? That is, a machine with a sensor array and computational capability that far exceeded normal human performance. Such advanced automation could calculate the most probable reaction of a goalkeeper based on history and the immediate movements they make right up to the last millisecond before the ball is struck.
Assuming the machine was limited in terms of the force it could apply to the ball, it could still adjust its actions as soon as any new information became available. I’m not saying the outcome will always be better for the machine penalty-taker. However, it could reduce the scope for error and randomness to dictate what finally happens.
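To make that idea concrete, here is a minimal sketch of how such a machine might revise its belief about the goalkeeper’s dive as new cues arrive, using a simple Bayesian update. All the numbers, names, and the decision rule here are invented purely for illustration; a real system would be vastly more sophisticated.

```python
# Hypothetical sketch: a penalty-taking machine updating its estimate of
# the goalkeeper's dive direction as last-moment sensor cues arrive.
# All probabilities are made up for the purpose of the example.

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Combine a prior over dive directions with the likelihood of the
    latest observed cue, then renormalise so probabilities sum to 1."""
    posterior = {d: prior[d] * likelihood[d] for d in prior}
    total = sum(posterior.values())
    return {d: p / total for d, p in posterior.items()}

# Prior belief built from the keeper's history (hypothetical figures).
belief = {"left": 0.5, "right": 0.3, "centre": 0.2}

# A last-millisecond cue: the keeper's weight shifts left, which is far
# more likely if they are about to dive left.
cue_likelihood = {"left": 0.7, "right": 0.2, "centre": 0.1}

belief = bayes_update(belief, cue_likelihood)

# Simple decision rule: aim where the keeper is least likely to be.
target = min(belief, key=belief.get)
```

The point of the sketch is only that each new observation narrows the uncertainty a little, right up until the ball must be struck; a human taker does something similar, but far more slowly and with far less reliable estimates.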
So, with that argument, in aviation I’m saying it’s not right to claim that Single Pilot Operations will always be worse than two-crew operations. Don’t get me wrong: those people aggressively advancing the idea that the intelligent machine will always be better than a human are missing something too.
One thing that highly capable automation could bring to the party is not only early detection and diagnosis of problems but a massive library of stored experience. How we embed and constantly update that flight experience is an almighty challenge.
After all, the dread in aviation is knowledge with hindsight. It takes the form: “You should have known. Why did you let this incident happen?”
I’m now tempted to reach for a Star Trek analogy. Every second an aircraft of a given type is flying, experience of its operation is being accumulated. If there are hundreds of that type flying at any moment across the globe, that’s a lot of data to collect, absorb, and think about before acting.
The fictional and scary Borg are cybernetic creatures linked by a hive mind, and they know a thing or two about assimilation. Granted, that’s far-fetched as analogies go, but my point is that I believe we are generations away from that kind of capability. Not only that, just as humans fail, so any such “intelligence” designed by humans will fail too.