Self-learning algorithms sound fine, but how do you trace the rationale for their decisions when something potentially safety-critical goes wrong?