By Scott Scheferman

Transhackerism: Coined in 2016

Updated: Jul 28, 2020

I am going to coin a new term today:  Transhackerism

"The belief or theory that hackers must evolve beyond their current physical and mental limitations, especially by means of science and technology, and namely by leveraging, breaking, and protecting artificial intelligence systems" -me, 2016

I just witnessed an IR consultant canvass 24,000+ pieces of (previously undiscovered) malware found in an environment of 10,000 hosts, in just a few hours' work: identifying, classifying, and even analyzing a dozen or so critically malicious files, reconstructing everything that happened on those hosts, and tying it back to anomalies in user account profile activity. In just a few hours' time, the human was able to leverage AI to identify high-confidence pivot points from which to perform additional expert queries and other automated tasks. Without AI, this would have taken weeks or months, or for some organizations... would never have been completed at all, for raw lack of resources and expertise. In fact, even the best hacker ninjas on the planet wouldn't be able to accomplish this without the insight and velocity provided by artificial intelligence in this context.
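To make the workflow concrete, here is a minimal, purely illustrative sketch of what "AI-assisted triage" boils down to: a model scores every sample, and only the high-confidence hits surface as pivot points for the human analyst. The feature names, weights, and samples below are all invented; a real deployment would use a trained model over far richer features.

```python
# Hypothetical sketch of AI-assisted malware triage: score each sample with a
# model, then surface high-confidence "pivot points" for expert follow-up.
import math

# Toy logistic-regression weights over a few static features.
# (Assumption: a trained model would supply these; these values are made up.)
WEIGHTS = {"entropy": 1.4, "imports_suspicious": 2.0, "packed": 1.1}
BIAS = -9.0

def score(sample: dict) -> float:
    """Return a 0-1 maliciousness confidence via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * sample.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def pivot_points(samples: dict, threshold: float = 0.9) -> list:
    """Rank thousands of samples and keep only the high-confidence hits --
    the handful worth a human analyst's deep dive."""
    scored = ((name, score(feats)) for name, feats in samples.items())
    hits = [(n, s) for n, s in scored if s >= threshold]
    return sorted(hits, key=lambda t: t[1], reverse=True)

corpus = {
    "dropper.exe": {"entropy": 7.8, "imports_suspicious": 1, "packed": 1},
    "notepad.exe": {"entropy": 5.1, "imports_suspicious": 0, "packed": 0},
    "loader.dll":  {"entropy": 7.9, "imports_suspicious": 1, "packed": 0},
}
print(pivot_points(corpus))
```

The point isn't the toy math; it's that the machine collapses 24,000 files into a ranked shortlist, and the human spends their hours on expert queries instead of grep.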

I do a lot of public speaking and end up 'talking' about this stuff a lot... but it isn't while conversing about it that I learn and gain perspective on just what is happening around us. It's in the field, day in and day out, on diverse networks, solving real problems. Hackers and hunters are transcending what we've collectively been to date, and we are doing so out of sheer necessity. In the past, we human security professionals have identified ourselves as "the experts with the wisdom, gut, experience and skills" that make all the difference when it comes to attacking or defending. We've always felt that being the smartest person in the room was an advantage of some sort, or that breaking things was novel unto itself. We've been a lot of things over the last 30+ years, but my fellow hackers and hunters, I think we need to look just slightly ahead and realize... we need to evolve our perspective and our identity, and redefine hacker wisdom itself, if we are still going to hack the planet and save the world.

What we need to embrace is Transhackerism. This will require even more humility than we've already mustered, even more patience and determination than we're already known for, and even more thinking outside the box than where our minds already dwell. We'll need to break systems of systems where if/then statements are the variables and hex is just 'noise'. We'll need to be able to answer questions for which we literally don't have answers today.

Right now we are lagging behind other industries: maybe a few dozen papers written on Adversarial Machine Learning here and there, and a few new (very awesome) tools released at Black Hat recently, too, such as Ablation, the new tool from Cylance's Paul Mehta.
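For readers new to adversarial ML, the core idea is simple enough to sketch in a few lines. Below is an invented toy: given query access to a (deliberately trivial) linear malware classifier, an attacker greedily perturbs a sample's features until the verdict flips. Real attacks target far richer models, but the loop is the same shape.

```python
# A minimal sketch of adversarial evasion: nudge a sample's features,
# one small step at a time, until the model's verdict flips to benign.
# The model, weights, and sample here are all fabricated for illustration.

def classify(x, w, b):
    """Toy linear model: returns True ('malicious') if w.x + b > 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

def evade(x, w, b, step=0.1, budget=100):
    """Move every feature against the sign of its weight (lowering the
    score) until the sample classifies benign, or the budget runs out."""
    x = list(x)
    for _ in range(budget):
        if not classify(x, w, b):
            return x  # evasion succeeded
        for i, wi in enumerate(w):
            x[i] -= step if wi > 0 else -step
    return None  # failed within budget

w, b = [1.0, 2.0, -0.5], -1.0
sample = [1.0, 1.0, 0.5]        # starts out classified 'malicious'
adv = evade(sample, w, b)
print(adv, classify(adv, w, b))  # perturbed features, now benign
```

This is exactly the kind of breaking-it-before-the-enemy-does work the papers and tools above are starting to explore: if a hacker can walk a model's decision boundary this easily, so can the adversary.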

But consider that, for the first time, AI has successfully beaten humans at everything from Go, to air-to-air fighter combat, to chatbots that fool 99% of us, and even (for the first time this June) the ability to discern inappropriate images faster than Facebook's entire population of 1.5B users can moderate them. (In 2014, FB had around 100,000 full-time human moderators on staff to augment the crowdsourced moderation of its user base.) Yes, AI has arrived, and we are just at the tip of the spear.

To keep up with AI, we as hackers must understand and break it just like we have broken systems in the past, before our enemies do the same. We need to think 'ahead of ourselves' now, more than ever. AI has the uncanny ability to learn at a faster pace than any one human or community can: it took only 72 hours for an AI model (Giraffe) to beat even the best chess players in the world, starting from 'absolute zero knowledge' about the game of chess at hour zero. And AI was able to predict and prevent even the most advanced threats: 18 months before the news of Sauron/Strider broke, the algorithms were able to autonomously predict, and prevent from running, any of the files identified in Symantec's "Strider" report. In some cases (like Zcryptor), algorithms were able to prevent execution of binaries six months before they were even compiled, or the campaign even conceived by the bad guys...

“We can only see a short distance ahead, but we can see plenty there that needs to be done.” - Alan Turing

At one point, a consulting outfit I worked with helped a victim whose environment was 95% encrypted in just 48 hours, falling over from a recent SamSam worm that spread via network shares and had made its way in through a vulnerable JBoss server. And yet, for some reason, this industry takes small comfort in having reduced our mean dwell time to under 200 days, from 450+ days just a few years ago? When a tool like CredCrack can hand an attacker domain admin credentials in 17 seconds... do we really care about a 200-day mean dwell time? The back half (post-execution) of the cyber kill chain is hyper-dynamic, and moves at the 'speed of computing' these days. We need to stop visualizing a 'threat actor' moving at the pace a red-teamer might. We need to stop thinking about threat actors, period. It's not like we are trying to counter a single human being when "they" move through the network... we are countering hundreds of tool and malware authors, hours of effort spent on automation, infinite knowledge transfer on TTPs including a myriad of uses for whitelisted tools like PowerShell, massive big-data projects and credential re-use, and all manner of evasion techniques that are 'baked in' to common attack platforms and exploit kits.

So while today we have endpoint protection that leverages AI to achieve 99% efficacy in predictive, autonomous prevention... what we still need is to evolve the back half of the cyber kill chain. We need transhackerism... to evolve well beyond basic anomaly detection engines, static analysis, etc. What will it take for us to 'solve' this much more complex space? What will it take to get to a place where machines can act autonomously on our behalf? Most efforts to date have resulted in a sea of false positives, and still lack that key ability to actually take predictive action ahead of the next malicious behavior.

One thing it will take is the ability to protect the machine learning and guided AI models themselves over time. They will be attacked directly, and indirectly by feeding data through them to find weaknesses and "edge cases" that may get by. Or they will be attacked by poisoning the data sets used to create them over time. So how do we build in additional layers of anomaly detection in order to discover attacks against our AI systems? How will we architect this 'onion layer' of AI that protects AI?  How do we architect the ability for a machine to know when something "isn't right", and to not respond to a command or data input that it knows is counter to its purpose, design and mission?  Walking the dog a bit further out... how will we hold machines accountable for actions they take on their own? 
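One hedged sketch of that 'onion layer' idea: a statistical gate in front of a model that refuses inputs too far from the data the model was trained on, so that poisoned or adversarial inputs never reach it. Everything below is invented for illustration; a production guard would use far more than per-feature z-scores, but the architectural point stands: the protecting layer is itself a model of "what normal input looks like."

```python
# A toy 'AI that protects AI': flag feature vectors whose z-score in any
# dimension exceeds a bound, so the protected model never sees wildly
# out-of-distribution input. Training data and threshold are fabricated.
import statistics

class InputGuard:
    def __init__(self, training_rows, max_z=3.0):
        cols = list(zip(*training_rows))
        self.means = [statistics.fmean(c) for c in cols]
        # pstdev can be zero for a constant column; floor it to avoid /0
        self.stds = [statistics.pstdev(c) or 1e-9 for c in cols]
        self.max_z = max_z

    def allow(self, x):
        """True only if every feature sits within max_z standard
        deviations of what the model saw in training."""
        return all(abs(xi - m) / s <= self.max_z
                   for xi, m, s in zip(x, self.means, self.stds))

# Data the deployed model is assumed to have been trained on.
train = [[0.9, 0.1], [1.0, 0.2], [1.1, 0.15], [0.95, 0.12]]
guard = InputGuard(train)

print(guard.allow([1.0, 0.14]))   # in-distribution: passed to the model
print(guard.allow([9.0, 5.0]))    # outlier: refused before the model sees it
```

A guard like this also gives the machine a primitive form of knowing something "isn't right": it can decline to act on input that contradicts everything it has ever seen, which is the seed of the accountability question above.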

And on the flip side, will we need to also evolve as transhackers into offensive security measures that monitor, gauge and counter our enemy's own AI efforts? What will be the threshold at which it is morally justified to attack another system that we know is being designed specifically for the purpose of attacking ours... especially when such an attack could cause loss of life or threaten human safety? Will we get to a state of M.A.D. (mutually assured destruction) as the last deterrent? Will governments and private industry be able to architect standards quickly and accurately enough to keep up with the pace of disruption?

These are challenges that are truly "just ahead" of us. None of them will be solved by individuals. They must be solved by large groups of thinkers working together, using common frameworks and a new lexicon able to navigate the world that is upon us. So in future-classic transhackerism fashion, will we solve that challenge?

In the time it took to read this, millions of machines just got significantly smarter. Once our neural networks are optimized and quantum computing arrives, able to solve problems in one microsecond that take current supercomputers 10,000 years... well, let's all hope that Transhackerism has arrived prior to the singularity just around the corner. 

Scott Scheferman

Written in 2016, yet more relevant than ever. Small changes made to this version.

Partial photo of painting by Katy Rokit, with permission.
