App folly

Isabel Oakeshott is interviewed. We are no wiser. The ins and outs of the conduct of Conservative Government Ministers during the COVID pandemic lockdowns is a story that will be written a thousand times. Hectares of the social media landscape will repeat every embarrassing blunder and poorly thought-out assertion. These ins and outs need to be dissected, but it’s not work for those trying to improve their mental health.

People who have had some exposure to British politics often love “Yes Minister”, the BBC series that overflowed with wit, twists and turns. It lifted the lid on the stumbling workings of Whitehall and the political class. At the time the series was made there were no mobile phones in every pocket and paper was still king. Civil servants carried bundles of files down endless corridors. This wood panelled and stuffy environment was a commonplace image.

Opening a file really meant getting a folder and putting numerous memos and reports in it. Stacking it high with the record of decision-making for future generations of historians to dissect.

In the 1970s, the speed of communication was moderated by the medium. When it came to paper trails, that was a relatively human speed. Typed memos were rarely dashed off without a thought. Documents were released with an official stamp and multiple signatories.

Fast forward to the 2020s. Office desks appear totally different from the past, that is if one exists at all. Mobiles have concentrated super-fast digital communication tools into the palm of a hand.

That said, official and unofficial communication channels continue to play their part in the corridors of power. What is shocking in the current news stories is just how much the unofficial communication channels seem to dominate.

After all, we are not talking about a release of official Government emails. It’s worth asking: why are Government Ministers using WhatsApp[1] so much? It’s a widely available commercial messaging application owned by the US company Meta.

Is the machinery of political governance getting so lax in the UK that we are beholden to a mobile messaging app over which we have no control whatsoever?

Globally, WhatsApp may have over 2 billion users but that’s no guarantee of its integrity. The system does get hacked. Ministers using unofficial communication channels as if they were totally within their control are foolish, unethical, and naïve, to say the least.


[1] https://en.wikipedia.org/wiki/WhatsApp

Just H

What is the future of Hydrogen in Aviation? Good question. Every futurologist has a place for Hydrogen (H) in their predictions. However, the range of optimistic projections is almost matched by the number of pessimistic ones.

There’s no doubt that aircraft propulsion using H as a fuel can be done. There’s a variety of ways of doing it but, the fact is, it can be done. What’s less clear is a whole mass of factors related to the economics, safety, security and desirability of a hydrogen-based society.

H can be a clean form of energy[1], as in its purest form the process of combustion produces only water. We need to note that combustion processes are rarely completely pure.

It’s an abundant element, but it prefers to be in the company of other elements. After all, the planet is awash with H2O. When H is on its own it has no colour, odour, or taste. In low concentrations, we humans could be oblivious to it even though there’s a lot of it in the compounds that make us up.

Number one on the periodic table, it’s a tiny lightweight element that can find all sorts of ways of migrating from A to B. Ironically, that makes it an expensive element to move around in commercially useable quantities. H is often produced far away from where it’s used. For users like aviation, this makes the subject of distribution a fundamental one.

Part of the challenge of moving H around is finding ways of increasing its energy density. So, making it liquid or pumping it as a high-pressure gas are the most economic ways of using it. If this is to be done with a high level of safety and security, then this is not going to come cheap.
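To put rough numbers on the energy density point, here is a minimal Python sketch comparing liquid hydrogen with Jet A kerosene. The figures are approximate, commonly quoted lower-heating-value and density estimates, used for illustration only.

```python
# Rough comparison of energy density: liquid hydrogen (LH2) vs kerosene (Jet A).
# All figures are approximate textbook values, for illustration only.

LH2_SPECIFIC_ENERGY_MJ_PER_KG = 120.0    # hydrogen carries ~3x kerosene's energy per kg
JET_A_SPECIFIC_ENERGY_MJ_PER_KG = 43.0

LH2_DENSITY_KG_PER_L = 0.071             # liquid hydrogen at ~20 K is very light
JET_A_DENSITY_KG_PER_L = 0.80

def volumetric_energy(mj_per_kg: float, kg_per_l: float) -> float:
    """Energy per litre of fuel (MJ/L)."""
    return mj_per_kg * kg_per_l

lh2_mj_per_l = volumetric_energy(LH2_SPECIFIC_ENERGY_MJ_PER_KG, LH2_DENSITY_KG_PER_L)
jet_a_mj_per_l = volumetric_energy(JET_A_SPECIFIC_ENERGY_MJ_PER_KG, JET_A_DENSITY_KG_PER_L)

print(f"LH2:   {lh2_mj_per_l:.1f} MJ/L")    # ~8.5 MJ/L
print(f"Jet A: {jet_a_mj_per_l:.1f} MJ/L")  # ~34.4 MJ/L
print(f"Tank volume needed for equal energy: {jet_a_mj_per_l / lh2_mj_per_l:.1f}x")
```

The arithmetic shows the shape of the problem: per kilogram hydrogen is a superb fuel, but per litre it is poor, which is why liquefaction or high-pressure storage, and the safety measures that go with them, dominate the cost discussion.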

There are a lot of pictures of what happens when this goes wrong. Looking back at the airships of the past there are numerous catastrophic events to reference. More relevantly, there’s the space industry to look at for spectacular failures[2]. A flammable hydrogen–air mixture doesn’t take much to set it off[3]. The upside is that H doesn’t hang around. Compared to other fuels, H is likely to disperse quickly. It will not pool on the ground like kerosene does.

In aviation, super-strict control procedures and maintenance requirements will certainly be needed. Every joint and connector will need scrupulous attention. Every physical space where gas can accumulate will need a detection system and/or a fail-proof vent.

This is a big new challenge to aircraft airworthiness. The trick is to learn from other industries.

NOTE: The picture. At 13:45 on 1 December 1783, Professor Jacques Charles and the Robert brothers launched a manned balloon in Paris. That first manned hydrogen balloon flight was 240 years ago.


[1] https://knowledge.energyinst.org/collections/hydrogen

[2] https://appel.nasa.gov/2011/02/02/explosive-lessons-in-hydrogen-safety/


To provoke

Social media provocateurs are on the rise. Say something that’s a bit on the edge and wait for the avalanche of responses. It’s a way of getting traffic to a site. The scientific and technical sphere has fewer of these digital provocateurs than the glossy magazine brigade, but the phenomenon is growing.

Take a method or technique that is commonly used, challenge people to say why it’s good while branding it rubbish. It’s not a bad way to get clicks. This approach to the on-line world stimulates several typical responses.

One: Jump on board. I agree the method is rubbish. Two: I’m a believer. You’re wrong and here’s why. Three: So what? I’m going to argue for the sake of arguing. Four: Classic fence-sitting. On the one hand you may be right; on the other hand you may be wrong.

Here’s one I saw recently about safety management[1]. You know those five-by-five risk matrices we use – they’re rubbish. They are subjective and unscientific. They give consultants the opportunity to escalate risks to make new work, or they give managers the opportunity to de-escalate risks to avoid doing more work. Now, that’s not a bad provocation.

If the author starts by accusing all consultants and managers of being manipulative bad actors, that sure is going to provoke a response. In safety management there are four pillars, and one of them is safety culture. So, if there are manipulative bad actors applying the process, there’s surely a poor safety culture, which makes everything else moot.

This plays into the discomfort some people have with the inevitable subjectivity of risk classification. It’s true that safety risk classification uses quantitative and qualitative methods. However, most typically quantitative methods are used to support qualitative decisions.

There’s an in-built complication with any risk classification scheme. It’s one reason why three-by-three risk matrices are often inadequate. When boundaries are set, there are always cases to decide for items that fall marginally on one side or the other of a prescribed line.
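The boundary problem can be made concrete with a minimal Python sketch of a five-by-five matrix. The severity and likelihood labels and the tolerability bands below are invented for illustration; they are not any regulator’s official scheme.

```python
# A toy five-by-five risk matrix. Labels and band thresholds are illustrative
# assumptions only, not an official classification scheme.

SEVERITY = ["negligible", "minor", "major", "hazardous", "catastrophic"]          # 1..5
LIKELIHOOD = ["extremely improbable", "improbable", "remote",
              "occasional", "frequent"]                                           # 1..5

def classify(severity: int, likelihood: int) -> str:
    """Map a (severity, likelihood) pair, each 1-5, to a tolerability band."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must be in the range 1-5")
    score = severity * likelihood
    if score >= 15:
        return "unacceptable"
    if score >= 6:
        return "tolerable (mitigate so far as reasonably practicable)"
    return "acceptable"

# The boundary problem in a nutshell: one step in a subjective judgement
# of likelihood flips the classification into a different band.
print(classify(3, 2))  # score 6 -> tolerable
print(classify(3, 1))  # score 3 -> acceptable
```

The two calls at the end differ by a single notch of likelihood, which is exactly the kind of marginal judgement the paragraph above describes: the matrix is only as objective as the assessment that feeds it.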

An assessment of safety risk is just that – an assessment. When we use the word “analysis” it’s the supporting work that is being referenced. Even an analysis contains estimations of the risk. This is particularly the case in calculations involving any kind of human action.

To say that this approach is not “scientific” is again a provocation. Science is far more than measuring phenomena. Far more than crunching numbers. It includes the judgement of experts. Yes, that judgement must be open to question. Testing and challenging is a good way of increasing the credibility of conclusions drawn from risk assessment.


[1] https://publicapps.caa.co.uk/docs/33/CAP795_SMS_guidance_to_organisations.pdf

Artificial intelligence (AI) transition

There’s much that has been written on this subject. In fact, for a non-specialist observer it’s not so easy to get to grips with the different predictions and views that are buzzing around.

There’s absolutely no doubt that Artificial intelligence (AI) will change every corner of society. Maybe a few living off-grid in remote areas will remain untouched but every other human on the planet will be impacted by AI. Where there’s digital data there will be AI. Some will say this brings the benefits of AI into our everyday and others herald a pending nightmare where we lose control.

Neither may be totally on the money, but what’s clear is that this is no ordinary technological transition. Up until now, the software we use has been a tool. Built for a purpose and shaped by those who programmed its code. AI is not like that at all. It’s a step beyond just a tool.

Imagine wielding a hammer that changed shape to suit a job, but where the user had no control over the shape it took. How will we take to something so useful but beyond our immediate control?

In civil aviation, AI opens the possibility of autonomous flight, preventive maintenance, and optimal air traffic management. It may work with human operators or replace them in its more advanced future implementations. Even the thought of this causes some professional people to recoil.

I’ve just finished reading the book[1] by Mo Gawdat, a former Google chief business officer, and he starts off being pessimistic about the dangers of widespread general AI. As he moves through his arguments, the book points to us as the problem and not the machines. It’s what we teach AI that matters, rather than the threat being intrinsic to the machine.

To me, that makes perfect sense. The notion of GIGO[2] or “Garbage In, Garbage Out” has been around as long as the computer. It does, however, put a big responsibility on those who provide the training data for AI or how that data is acquired.
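GIGO can be shown in miniature. The “model” below is deliberately trivial, learning nothing but the majority label in its training data, and every name and number is invented. The point carries nonetheless: identical learning code, different data quality, different behaviour.

```python
# GIGO in miniature: the same learning procedure, fed clean vs corrupted labels.
# A deliberately trivial "model": it learns only the majority label it was shown.
from collections import Counter

def train_majority(labels: list[str]) -> str:
    """Return the most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Ten identical events; the only difference is labelling quality.
clean_labels = ["safe"] * 9 + ["unsafe"] * 1
garbage_labels = ["safe"] * 4 + ["unsafe"] * 6   # the same events, badly labelled

print(train_majority(clean_labels))    # -> safe
print(train_majority(garbage_labels))  # -> unsafe: garbage in, garbage out
```

No amount of cleverness in the training procedure rescues the second result; the responsibility sits with whoever curated the data, which is the point being made above.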

Today’s social media gives us a glimpse of what happens when algorithms slavishly give us what we want. Anarchic public training from millions of hand-held devices can produce some undesirable and unpleasant outcomes.

It may be that we need to move from a traditional software-centric view of how these systems work to a more data-centric view. If AI starts with poor training data, the outcome will assuredly be poor.

Gawdat dismisses the idea that general AI can be explainable. Whatever graphics or equations may be contrived, they are not going to give a useful representation of what goes on inside the machine after a period of running. An inability to explain the inner workings of the AI may be fine for non-critical applications, but it’s a problem in relation to safety systems.


[1] Mo Gawdat. Scary Smart, the future of artificial intelligence and how you can save our world. 2021. ISBN 978-1-5290-7765-0.

[2] https://techterms.com/definition/gigo

Fatal accident in Nepal 3

The air transport year started badly. A Yeti Airlines twin-engine ATR 72-500[1] aircraft plunged into a gorge as it was approaching Pokhara International Airport (PKR) in Nepal.

Singapore’s Ministry of Transport (MOT) is supporting the Nepalese authorities.

The latest news is that the aircraft’s Flight Data Recorder (FDR) and Cockpit Voice Recorder (CVR) have been replayed. It is reported that the analysis of the FDR and CVR data shows that the propellers of both engines were feathered during approach.

It is not known if this was due to the actions of the crew or a technical fault.

The investigation continues.

The propellers on this aircraft type have pitch control of their blades. The pitch of the blades can be changed to the “feather” position (approximately 90 degrees). Feathered blades reduce the drag that would occur in the event of an engine shutdown.

An event like this, occurring while the aircraft is slowing on approach, will have an impact on the aircraft’s airspeed. Monitoring airspeed on approach is vital.

The suspicion that the aircraft may have stalled remains one theory.

The normal actions required on an approach are called up on a checklist. 

Example: Here is a video of an ATR 72-500 landing.

Notice the pilots’ hands at 4:57 minutes in.

An incident involving an aircraft of the ATR 72 type on the way from Stockholm to Visby[2] is interesting but may not be relevant in the Yeti Airlines case.


[1] https://skybrary.aero/aircraft/at75

[2] https://www.havkom.se/en/investigations/civil-luftfart/tillbud-med-ett-luftfartyg-av-typen-atr-72-pa-vaeg-fran-bromma-till-visby

Rules

Let’s be controversial. Principle-Based Rules could be retitled Hypocrisy-Based Rules.

Now, I’ve already caused confusion, because most consumers or users of services will not have a clue what I’m talking about. The way rules are put together is not an everyday topic of conversation, even though, in our complex society, the subject is vitally important.

Listening to the daily news, it’s clear there’s been a breakdown between British Gas, its regulator and some vulnerable customers. Practices undertaken by a British Gas sub-contractor have shocked people. Breaking into people’s homes should not be normal business practice.

Yet these real-world actions happened, and they sharply go against the “principles” of the energy supplier[1] and its regulator. So, do the rules that exist produce behaviours that fit with public expectations? This is the “how long is a piece of string” question. In other words, the perception of the rules, such as they are, can be favourable, but when it comes to implementation it’s another story completely.

Sadly, the defensive reactions of both energy supplier and regulator are to frame the whole problem as one of first not knowing, then discovery, investigation and corrective action. This is not bad in and of itself, but it’s the most basic kind of reactive response that can be expected. It says to the consumer: we will wait for an influential spokesperson[2] to highlight a failing and then respond to pressure.

Has British Gas captured its regulator? That is, has it convinced the regulator that everything is hunky-dory, and perhaps convinced itself it’s hunky-dory too, while not bothering to look at bad customer-facing practices?

Hence my coining the notion of Hypocrisy-Based Rules. I’m not saying for one moment that regulatory rules can be written that have no gaps, inconsistencies, or avenues for “creative compliance”. It can be advantageous to the consumer that an energy supplier has a degree of freedom in how it complies with the rules.

What was missing was regard for the need for constant vigilance. Reports suggest that British Gas’s sub-contractor behaved in ways that did not fulfil regulatory goals.

Although it’s long in the tooth, this quote from an Irish statesman has resonance:

The condition upon which God hath given liberty to man is eternal vigilance. John Philpot Curran[3]

In this simple sentence “liberty” can be replaced with safety, security, prosperity, and honesty. It’s often been reworked.


[1] https://www.centrica.com/about-us/people-culture/our-code

[2] https://www.thetimes.co.uk/article/british-gas-prepayment-meter-debt-energy-bills-investigation-wrgnzt6xs

[3] https://www.britannica.com/biography/John-Philpot-Curran

Apprenticeships

What do you think are the reasons behind the overall decline in engineering apprenticeship starts in recent years? We are particularly interested in understanding more about supply and demand.


Image. It persists even now. In fact, the paper[1] that asks these questions has images of spanner turning. It’s so easy to pick royalty-free pictures that pop up from search engine searches. These images show mechanics in blue overalls. Don’t get me wrong, this is not the least bit disrespectful of spanner turning.

A deep cultural memory persists. It has multiple elements. You could say that, in part, industrialisation still conjures up images of dark satanic mills contrasted with the grand country homes of a class of business owners. Basically, dirty and clean as two key words.

The Victorians did a great deal both to elevate engineering personalities, like Brunel[2], and to hold them as different or apart from the upper-middle-class society that the fortunate aspired to join. Those who forged the prosperity of the age had to work hard to be accepted in “society”.

Today, it makes no difference that it’s American: popular comedies like “The Big Bang Theory[3]” entertain us immensely but pigeonhole the “nerd” as eccentric, peculiar and unfathomable. I admit this is attractive to a proportion of young people, but maybe such shows create exclusivity rather than opening people’s eyes to possibilities.

Having Government Ministers standing up and calling for Britain to become a version of Silicon Valley doesn’t help. The moment that signal is heard from those in authority, young people switch off. To boot, the image conjured up is a whole generation out of date. We have the Windows 95 generation telling the iPhone generation the best direction to get to the 2030s.

Here’s a proposition – you must see yourself as an “engineer” to become an engineer. That can be said of a whole myriad of different professions, each with a common stereotype. Look at it the other way: if you can’t see yourself as a person who can shape the future, it isn’t likely you will choose engineering.

My observation is that we need to get away from too many images of activities. In other words, this is an engineer at work. This is what they do. This is what they look like. What we need to address is the touchy-feely stuff. Let’s consider how young people feel about the world they have inherited from my generation.

A high level of motivation comes from the wish to make changes and the feeling that it’s possible to make changes. That the skills picked-up as an apprentice will help you shape the future. Engineering is part of making a better world.

[My history is that of an Engineering Industry Training Board (EITB) apprentice who started work in 1976.]


[1] https://www.engineeringuk.com/media/318763/fit_for_the_future_knight_and_willetts_apprenticeship_inquiry_euk_call_for_evidence.pdf

[2] https://en.wikipedia.org/wiki/Isambard_Kingdom_Brunel

[3] https://www.imdb.com/title/tt0898266/

Cyber

Now, where did that word come from? My earliest recollection is the scariest adversary of Dr Who. The Cybermen hit the small screen back in 1966. This fiction of an amalgam of machine and human is particularly scary. This was the fabled monster that drove young children to hide behind the sofa. The BBC hasn’t given up on this character. Somehow, these fictional metal men are almost certainly going to return to run amok and devastate humanity.

Patrick Stewart being assimilated by the Borg was a mega-dramatic cliff-hanger. The Cylons[1] obliterating the colonies sent humanity on a runaway across the endless expanse of space. The indestructible killing machine of the Terminator was a huge box office success. There’s a recurring theme. In the popular imagination, the combination of machine and human is thought of as a fundamental threat. The enemy is the machine that transforms human mind and body into a single-minded demon intent on mischief or destruction.

By this reckoning you might think that “cyber security” was a Robocop-like police force committed to rooting out bad cyborgs. Yet that’s nothing like the common usage of the term. There’s a certain threat, and it does involve digital systems and humans. However, in this century they are not yet[2] wandering around doing unpleasant things to all and sundry.

Strangely enough, the term “cybernetics” has been around for a long time. It’s not about robots. It came into being before modern digital systems and the silicon revolution kicked off. In part, “cyber” was coined as a way of expressing the almost magical qualities of feedback processes. It was wide-ranging, in that the term described natural as well as mechanical systems. In the word’s origins there was nothing sinister or chilling implied.

In 2023, “cyber security” is how we reduce the risk of cyber-attack[3]. Not a great description, but let’s just say the notion is about dealing with recognised threats in digital systems.

This wasn’t something that was commonplace until the Personal Computer (PC), its software and the Internet connected billions of people. The normal human limitations that constrained our sphere of influence have been extended across the globe. Now, bad actors intentionally doing bad things can be based anywhere on the planet.

Since they are human actors, they are mighty creative and inventive. These people are a constant threat, like the Borg[4], adapting and modifying what they do to counter any actions to defeat them. Our defence can’t be that of the Battlestar Galactica, disconnection; we are going to have to find another way. Unlike some threats, there’s little chance this one will ever go away.


[1] https://ew.com/gallery/battlestar-galactica-12-things-you-need-know-about-cylons/

[2] https://www.bostondynamics.com/

[3] https://www.ncsc.gov.uk/section/about-ncsc/what-is-cyber-security

[4] https://intl.startrek.com/database_article/borg

Still learning lessons

Mobility has transformed society. By land, by sea or by air the world we see around us has been shaped by the technology that has enabled us to move people, goods, and services. Aviation, the youngest means of everyday transport, has radically transformed society in just over a century.

Demand for air transport is linked to economic development and at the same time air transport is a driver in an economy. Nearly all States work to encourage the growth of aviation in one form or another. All States acknowledge the need for the stringent regulation of activities in their airspace.

In 2019, some 4.5 billion people moved around the globe by air. Well, that was until the COVID pandemic struck[1]. Even so, there’s an expectation that global air traffic levels will start to exceed those of 2019 as we get into 2025 and beyond.

One quote, among many, sums up the reason for the safety regulation of flying, and it is:

“Aviation in itself is not inherently dangerous. But to an even greater degree than the sea, it is terribly unforgiving of any carelessness, incapacity or neglect.”

[Captain A. G. Lamplugh, British Aviation Insurance Group, London. 1930.]

Here the emphasis is on aviation safety and security as the top considerations. In fact, ask an airline CEO what the number one priority of their business is, and that’s likely how they will answer, at least on the record. Much of that open expression will be sincere, but additionally it’s linked to the need to maintain public confidence in the air transport system.

We need to remember that aviation had a shaky start. Those magnificent men, and women in their flying machines were adventurous spirits and born risk takers. That is calculated risk takers. Few of them lasted long unless they mastered both the skill and science of flying.

In the post war era, improvements in aviation safety have been dramatic. As the number of hours flown and the complexity of aircraft has grown so has the level of flight safety. Aviation has been an uncompromising learning machine. A partnership between States and industry.

Sadly, in part, the framework of international regulation we may now take for granted has been developed because of lessons learned from accidents and incidents, many of which were fatal.


[1] https://www.icao.int/sustainability/Documents/COVID-19/ICAO_Coronavirus_Econ_Impact.pdf

AI2

There’s not just one form of Artificial Intelligence (AI). This group term hides a great panoply of different configurations, shapes, and forms of applications.

One of the most impactful applications is that of machine learning. It’s where we go beyond a conventional computer’s ability to store and manipulate information against set rules. It’s where the machine has the capability to learn new ways of interpreting information, and thus becomes different, every day of operation, from the day it was switched on. That’s a bit vague, but it captures the essence of moving from deterministic to non-deterministic systems.
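The deterministic-versus-learning distinction can be sketched in a few lines of Python. The toy “sensor alert” below is entirely invented; the point is only that the learning version’s answer depends on everything it has seen so far, not just on its original code.

```python
# A toy contrast: a fixed-rule system vs one whose behaviour drifts with its data.
# All names and numbers are invented for illustration.

FIXED_THRESHOLD = 100.0  # conventional software: the rule is set once, in code

def conventional_alert(reading: float) -> bool:
    """Same input, same answer, forever."""
    return reading > FIXED_THRESHOLD

class LearningAlert:
    """A system whose notion of 'abnormal' depends on everything observed so far."""
    def __init__(self) -> None:
        self.mean = 0.0
        self.n = 0

    def observe(self, reading: float) -> None:
        # incremental (online) update of the running mean
        self.n += 1
        self.mean += (reading - self.mean) / self.n

    def alert(self, reading: float) -> bool:
        # flag anything 50% above the learned baseline
        return reading > 1.5 * self.mean

model = LearningAlert()
for r in [80.0, 90.0, 100.0]:
    model.observe(r)

# The same reading can be judged differently on different days:
print(model.alert(150.0))   # baseline mean is 90  -> 150 > 135   -> True
model.observe(150.0)        # after seeing it, the baseline shifts
print(model.alert(150.0))   # baseline mean is 105 -> 150 > 157.5 -> False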

In all this we do presuppose that such complex systems are in the hands of able and highly illiterate users who understand what they are doing in training that learning machine. There’s debate about how bias in algorithms can produce unintended consequences. In addition, a reliable and trustworthy machine can be trained in a way that embeds errors and biases too[1].

Just as a child picks up the bad habits of a parent, so “intelligent” machines can learn from pilots, controllers and engineers who may have less than optimal ways of undertaking tasks. This Human-AI interplay is likely to become a major area of study. As the topic of Human Factors is itself a large body of material.

Already with the debate on social media it is all too obvious that the aviation community has a wide range of views on the use of AI. All the way from utter rejection, or scepticism deeming such systems as “unsafe” to advocates who profess only the benefits and merits of such systems.

Clearly, both extreme ends of the spectrum of professional views don’t help much. I don’t think that the promoters of AI want to see blind overreliance on it. Equally, surly even ardent sceptics can see virtue in making the best use of the accumulated knowledge that is available.

I can foresee a system of systems approach. With my parent and child analogy, from time to time a child will ask a question that is blunt and to the point. A question that demands a straightforward answer. This can be uncomfortable but hits out at biases and bad habits.

In aircraft systems there are boundaries that must be respected. The physics of flight dictate that going beyond those boundaries is generally not good for life and limb. So, a system programmed to question an expert system, one AI questioning another AI, or even question its trainer, is not beyond the realms of possibility. It might even be a good idea.


[1] https://www.nature.com/articles/s41746-022-00737-z