Can’t get enough Thomas Dolby. As usual, reading the Google News science feed.
A. Gelatinous snails ‘fly’ through water, say scientists. Cool, and all, but I mostly mention this because The Gelatinous Snails would make a good band name.
B. Seems Neanderthals and Homo Sapiens got jiggy wit’ it, at least outside Africa.

Always wondered why the idea that very closely related subspecies of Man interbred is treated as anything but obvious, given the behavior of people today, and, indeed, the behavior of the vast bulk of sexually-reproducing species. What would be news is if, somehow, it were shown that such dalliances *didn’t* take place.
Somebody was invading somebody else’s range, or maybe both were moving into unclaimed territory at the same time. As a general, Darwin-based rule: if you find something living someplace, that something stands at the end of a long line of successful natural selection for that particular environment. You could do worse, biology permitting, than to bet your progeny’s survival on mixing it up with those genes if you plan on staying in that environment.
At any rate, that Artist’s Impression of a Neanderthal is one fine chunk of manhood, to judge by the young lady in the picture.
C. Here (item C.) I make fun of silly anthropological/sociobiological overreach; here the Statistician to the Stars beats it to death with a stick. Silly Just So stories!
D. $5 million prize for artificial intelligence targets ‘dystopian conversation’. Well. Some wit recently said that his fear of AI was somewhat diminished when he remembers that its highest state achieved so far is probably the auto-correct on his smartphone.
What I fear is not that AIs will get so smart they decide they don’t need us – very unlikely on a number of fronts – but rather that more and more power to screw things up will be given to machines that just execute flawed, even stupid, algorithms concocted by hubris-ridden geeks. “Oops!” I hear some sweaty geek sheepishly saying, “the AI evidently concluded that shutting down all the power plants and draining all the reservoirs was the thing to do based on a misinterpretation of a fish tank maintenance protocol. My bad.”
E. Regarding the foregoing: Scientists Want to Teach Robots Morality by Reading Them Bedtime Stories. OK, might want to adjust that headline writer’s meds. Scientists want a lot of things, like pretty women caring about their massive brains and rivers flowing with Mountain Dew. Don’t mean they get them. Getting a wee bit ahead of ourselves, here.
I agree with your concerns about AI. It’s far more likely that we humans will make a programming error than that machines will consciously choose to do us harm.
I think we have to keep reminding ourselves that all this AI stuff still depends on people writing code and building hardware, and that those activities are very limited and error-prone. The idea that we can just set the stage for an ‘emergent property’ that we’d like to see and then, boom, it ‘emerges’ – that’s not at all obvious or inevitable upon a moment’s reflection. In fact, I’m trying to think of a planned, man-made example where that’s how it worked out. Sure, we build things all the time to exploit properties we already know about or at least guess at (thinking of the Wright Bros) to do stuff never done before. But, like cargo cultists, AI enthusiasts just suppose that intelligence is something that arises inevitably once you get the material pieces *you already understand* in place. But until we do it, it’s impossible to be sure that we understand what those material prerequisites are, let alone whether they are in themselves enough.
With respect to Neanderthals and Homo Sapiens “getting jiggy”, have you read Robert Sawyer’s “The Neanderthal Parallelax (sp?)”?
No, I have not. Sounds interesting, but also like a book I’d throw against the wall every once in a while.
Your “Did Neanderthals have a soul?” essay is interesting as well.