Using ChatGPT to Deobfuscate Malicious Scripts, (Wed, Mar 13th)


Today, most malicious scripts in the wild are heavily obfuscated. Obfuscation is key to slowing down the security analyst's job and to bypassing simple security controls. There are many techniques available. Most of the time, your trained eyes can spot them in a few seconds, but processing them manually remains a pain. How to handle them? For some of them, you have tools like [1], developed by Didier, to convert classic encodings back to strings. Sometimes, you can write your own script (time-consuming) or use a CyberChef recipe. To speed up the analysis, why not ask AI tools for some help? Let's see a practical example with ChatGPT.
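To illustrate the kind of "classic encodings" such tools reverse, here is a minimal Python sketch of a common two-layer scheme (single-byte XOR followed by Base64). The payload string and the key are hypothetical, chosen only for demonstration:

```python
import base64

KEY = 0x2A  # hypothetical single-byte XOR key

def obfuscate(text: str, key: int = KEY) -> str:
    """XOR each byte with the key, then Base64-encode (a common two-layer trick)."""
    xored = bytes(b ^ key for b in text.encode())
    return base64.b64encode(xored).decode()

def deobfuscate(blob: str, key: int = KEY) -> str:
    """Reverse the layers in order: Base64-decode first, then undo the XOR."""
    raw = base64.b64decode(blob)
    return bytes(b ^ key for b in raw).decode()

# Round-trip on a hypothetical command line
payload = obfuscate("powershell -enc ...")
print(deobfuscate(payload))
```

A trained eye recognizes the Base64 alphabet immediately, but the XOR layer hides the plaintext from simple string searches; unwinding several such layers by hand is exactly the tedious work an AI assistant or a CyberChef recipe can shortcut.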
