Skyline Advisor Pro Proactive Findings – February Edition

This post was originally published on this site

VMware Skyline releases new Proactive Findings every month. Findings are prioritized by trending issues in VMware Support, issues raised through Post Escalation review, security vulnerabilities, and issues raised by VMware engineering and customers. For the month of January, we released 20 new Findings. Of these, there are 16 Findings based on trending issues, 2 … Continued

The post Skyline Advisor Pro Proactive Findings – February Edition appeared first on VMware Support Insider.

Let Your IPv6-only Workloads Connect to IPv4 Services


Today we are announcing two new capabilities for Amazon Virtual Private Cloud (VPC) NAT gateway and Amazon Route 53, allowing your IPv6-only workloads to transparently communicate with IPv4-only services. Curious? Read on; I have details for you.

Some of you are running very large workloads involving tens of thousands of virtual machines, containers, or micro-services. To do so, you configured these workloads to work in the IPv6 address space. This avoids the problem of running out of available IPv4 addresses (a single VPC has a maximum theoretical size of 65,536 IPv4 addresses, compared to /56 ranges for IPv6, allowing for a maximum theoretical size of 2^72 IPv6 addresses), and it saves you from additional headaches caused by managing complex IPv4-based networks (think about non-overlapping subnets between VPCs belonging to multiple AWS accounts, AWS Regions, or on-premises networks).

But can you really run an IPv6 workload in isolation from the rest of the IPv4 world? Most of you told us it is important to let such workloads continue to communicate with IPv4 services, either to make calls to older APIs or just as a transitional design while you are migrating multiple dependent workloads from IPv4 to IPv6. Not having the ability to call an IPv4 service from IPv6 hosts makes migrations slower and more difficult than they need to be. It has obliged some of you to build custom solutions that are hard to maintain.

This is why we are launching two new capabilities allowing your IPv6 workloads to transparently communicate with IPv4 services: NAT64 (read “six to four”) for the VPC NAT gateway and DNS64 (also “six to four”) for the Amazon Route 53 resolver.

How Does It Work?
As illustrated by the following diagram, let’s imagine I have an Amazon Elastic Compute Cloud (Amazon EC2) instance with an IPv6-only address that has to make an API call to an IPv4 service running on another EC2 instance. In the diagram, I chose to have the IPv4-only host in a separate VPC in the same AWS account, but these capabilities work to connect to any IPv4 service, whether in the same VPC or in another AWS account’s VPC, your on-premises network, or even on the public internet. My IPv6-only host only knows the DNS name of the service.

Here is the sequence of events when the IPv6-only host initiates a connection to the IPv4 service:

1. The IPv6-only host makes a DNS call to resolve the service name to an IP address. Without DNS64, Route 53 would have returned an IPv4 address, which the IPv6-only host would not have been able to connect to. But starting today, you can turn on DNS64 for your subnet. The DNS resolver first checks if the record contains an IPv6 address (AAAA record). If it does, the IPv6 address is returned, and the IPv6 host can connect to the service using just IPv6. When the record only contains an IPv4 address, the Route 53 resolver synthesizes an IPv6 address by prepending the well-known 64:ff9b::/96 prefix to the IPv4 address.

For example, when the IPv4 service has the address 34.207.250.62, Route 53 returns 64:ff9b::22cf:fa3e.

IPv6 (hexadecimal) : 64:ff9b::  22  cf  fa  3e
IPv4 (decimal)     :            34 207 250  62

64:ff9b::/96 is a well-known prefix defined in RFC 6052, a proposed standard of the IETF. Reading the text of the standard is a great way to learn all the details about IPv6-to-IPv4 translation (and, I admit, to fall asleep rapidly).
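The resolver behavior described in step 1 can be modeled in a few lines of Python. This is an illustrative sketch only, not the actual Route 53 implementation; the record store, names, and addresses below are made up for the example:

```python
import ipaddress

# Hypothetical record store for illustration: name -> {"A": [...], "AAAA": [...]}
RECORDS = {
    "api.example.internal": {"A": ["34.207.250.62"]},
    "v6.example.internal": {"AAAA": ["2600:1f18::1"]},
}

WKP = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

def resolve_aaaa(name, records=RECORDS):
    """Model of DNS64: return a real AAAA record if one exists; otherwise
    synthesize an IPv6 address by embedding the A record's IPv4 address
    in the last 32 bits of the well-known prefix."""
    entry = records.get(name, {})
    if entry.get("AAAA"):
        return entry["AAAA"][0]
    if entry.get("A"):
        v4 = ipaddress.IPv4Address(entry["A"][0])
        return str(ipaddress.IPv6Address(int(WKP.network_address) | int(v4)))
    return None

print(resolve_aaaa("v6.example.internal"))   # real AAAA, returned unchanged
print(resolve_aaaa("api.example.internal"))  # synthesized from the A record
```

Running the sketch returns the native IPv6 address for the first name and a 64:ff9b::/96-embedded address for the second, mirroring the two cases the resolver handles.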

2. The IPv6 host initiates a connection to 64:ff9b::ffff:22cf:fa3e. You may configure subnet routing to send all packets starting with 64:ff9b::/96 to the NAT gateway. The NAT gateway recognizes the IPv6 address prefix, extracts the IPv4 address from it, and initiates an IPv4 connection to the destination. As usual, the source IPv4 address is the IPv4 address of the NAT gateway itself.
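The address arithmetic the gateway performs in this step can be illustrated with Python's `ipaddress` module. This is a sketch of the RFC 6052 mapping only, not of the stateful translation the gateway maintains:

```python
import ipaddress

WKP = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

def extract_ipv4(nat64_address):
    """Recover the IPv4 destination embedded in a NAT64-synthesized IPv6
    address: the original IPv4 address occupies the last 32 bits."""
    v6 = ipaddress.IPv6Address(nat64_address)
    if v6 not in WKP:
        raise ValueError(f"{nat64_address} is not in {WKP}")
    return str(ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF))

print(extract_ipv4("64:ff9b::22cf:fa3e"))  # the IPv4 service's real address
```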

3. When the response packet arrives, the NAT gateway translates it back: it restores the originating host’s IPv6 address as the destination and prepends the well-known prefix 64:ff9b::/96 to the IPv4 source address of the response packet.

Now that you understand how it works, how can you configure your VPC to take advantage of these two new capabilities?

How to Get Started
To enable these two capabilities, I have to adjust two configurations: first, I flag the subnets that require DNS64 translation, and second, I add a route to the IPv6 subnet routing table to send part of the IPv6 traffic to the NAT gateway.

To enable DNS64, I have to use the new --enable-dns64 option to modify my existing subnets. In this demo, I use the modify-subnet-attribute command. This is a one-time operation. I can do it using the VPC API, the AWS Command Line Interface (CLI), or the AWS Management Console. Notice this is a subnet-level configuration that must be turned on explicitly. By default, the existing behavior is maintained.

aws ec2 modify-subnet-attribute --subnet-id subnet-123 --enable-dns64

Second, I have to add a route to the subnet’s routing table so that the VPC forwards the IPv6 packets synthesized by DNS64 to the NAT gateway. The route sends all packets with destination 64:ff9b::/96 to the NAT gateway.

aws ec2 create-route --route-table-id rtb-123 --destination-ipv6-cidr-block 64:ff9b::/96 --nat-gateway-id nat-123

The following diagram illustrates these two simple configuration changes.

With these two simple changes, my IPv6-only workloads in the subnet may now communicate with IPv4 services. The IPv4 service might live in the same VPC, in another VPC, or anywhere on the internet.

You can continue to use your existing NAT gateway, and no change is required on the gateway itself or on the routing table attached to the NAT gateway subnet.

Pricing and Availability
These two new capabilities for the VPC NAT gateway and Route 53 are available today in all AWS Regions at no additional cost. Regular NAT gateway charges may apply.

Go and build your IPv6-only networks!

— seb

VMware Skyline Log Assist Support for SDDC Manager

SDDC Log Assist Workflow

As many of you know, Log Assist is one of VMware Skyline’s most popular features, consistently ranking high among other core functionality, including Upgrade Recommendations, TAM Reports, and API Support. And it’s popular for a good reason. Skyline Log Assist streamlines the process of manually gathering and uploading a support bundle used by VMware … Continued

The post VMware Skyline Log Assist Support for SDDC Manager appeared first on VMware Support Insider.

Skyline Insights API – Creating a Ticket in ServiceNow


We have been asked on numerous occasions to show a demo of using the Skyline Insights API to send Proactive Findings data to ServiceNow. Before we can show this, please make sure you have read this article detailing how to get the findings to ensure you understand the basics. It is also a … Continued

The post Skyline Insights API – Creating a Ticket in ServiceNow appeared first on VMware Support Insider.

Using Snort IDS Rules with NetWitness PacketDecoder, (Sat, Feb 26th)


NetWitness has the ability to load Snort rules on its PacketDecoder to detect and alert on suspicious activity. Since it is useful to be able to see a signature’s makeup and what it is looking for, I created a script that parses the Snort rule tarball into a single file (list.rules), which can be pushed to and loaded on all the PacketDecoders. The script also parses each signature into a single HTML file that can be queried to review the signature and understand what the alert is matching.
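As a rough illustration of the first part of that workflow (this is a hypothetical sketch, not the diarist’s actual script; the function name and output path are assumptions), merging every *.rules file from a rule tarball into one list.rules could look like:

```python
import tarfile
from pathlib import Path

def merge_rules(tarball_path, out_path="list.rules"):
    """Concatenate every *.rules file found in a Snort rule tarball into a
    single file that can be pushed to the PacketDecoders. Returns the
    number of non-empty lines written."""
    lines = []
    with tarfile.open(tarball_path) as tar:
        for member in tar.getmembers():
            if member.isfile() and member.name.endswith(".rules"):
                data = tar.extractfile(member).read().decode("utf-8", "replace")
                lines.extend(line for line in data.splitlines() if line.strip())
    Path(out_path).write_text("\n".join(lines) + "\n")
    return len(lines)
```

The real script also splits each signature into an HTML file for review; that step is omitted from this sketch.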

AA22-057A: Destructive Malware Targeting Organizations in Ukraine


Original release date: February 26, 2022


Actions to Take Today:
• Set antivirus and antimalware programs to conduct regular scans.
• Enable strong spam filters to prevent phishing emails from reaching end users.
• Filter network traffic.
• Update software.
• Require multifactor authentication.

Leading up to Russia’s unprovoked attack against Ukraine, threat actors deployed destructive malware against organizations in Ukraine to destroy computer systems and render them inoperable. 

  • On January 15, 2022, the Microsoft Threat Intelligence Center (MSTIC) disclosed that malware, known as WhisperGate, was being used to target organizations in Ukraine. According to Microsoft, WhisperGate is intended to be destructive and is designed to render targeted devices inoperable. 
  • On February 23, 2022, several cybersecurity researchers disclosed that malware known as HermeticWiper was being used against organizations in Ukraine. According to SentinelLabs, the malware targets Windows devices, manipulating the master boot record, which results in subsequent boot failure. 

Destructive malware can present a direct threat to an organization’s daily operations, impacting the availability of critical assets and data. Further disruptive cyberattacks against organizations in Ukraine are likely to occur and may unintentionally spill over to organizations in other countries. Organizations should increase vigilance and evaluate their capabilities encompassing planning, preparation, detection, and response for such an event. 

This joint Cybersecurity Advisory (CSA) between the Cybersecurity and Infrastructure Security Agency (CISA) and Federal Bureau of Investigation (FBI) provides information on WhisperGate and HermeticWiper malware as well as open-source indicators of compromise (IOCs) for organizations to detect and prevent the malware. Additionally, this joint CSA provides recommended guidance and considerations for organizations to address as part of network architecture, security baseline, continuous monitoring, and incident response practices.

Click here for a PDF version of this report.

Technical Details

Threat actors have deployed destructive malware, including both WhisperGate and HermeticWiper, against organizations in Ukraine to destroy computer systems and render them inoperable. Listed below are high-level summaries of campaigns employing the malware. CISA recommends organizations review the resources listed below for more in-depth analysis and see the Mitigation section for best practices on handling destructive malware.   

On January 15, 2022, Microsoft announced the identification of a sophisticated malware operation targeting multiple organizations in Ukraine. The malware, known as WhisperGate, has two stages that corrupt a system’s master boot record, display a fake ransomware note, and encrypt files based on certain file extensions. Note: although a ransomware message is displayed during the attack, Microsoft highlighted that the targeted data is destroyed and is not recoverable even if a ransom is paid. See Microsoft’s blog on Destructive malware targeting Ukrainian organizations for more information and see the IOCs in table 1.

Table 1: IOCs associated with WhisperGate

Name         File Category  File Hash  Source
WhisperGate  stage1.exe                Microsoft MSTIC
WhisperGate  stage2.exe                Microsoft MSTIC

On February 23, 2022, cybersecurity researchers disclosed that malware known as HermeticWiper was being used against organizations in Ukraine. According to SentinelLabs, the malware targets Windows devices, manipulating the master boot record and resulting in subsequent boot failure. Note: according to Broadcom, “[HermeticWiper] has some similarities to the earlier WhisperGate wiper attacks against Ukraine, where the wiper was disguised as ransomware.” See the following resources for more information and see the IOCs in table 2 below. 

Table 2: IOCs associated with HermeticWiper

Name                Category         File Hash                                                         Source
Win32/KillDisk.NCV  Trojan           912342F1C840A42F6B74132F8A7C4FFE7D40FB77                          ESET research
HermeticWiper       Win32 EXE        912342f1c840a42f6b74132f8a7c4ffe7d40fb77
HermeticWiper       Win32 EXE        61b25d11392172e587d8da3045812a66c3385451
RCDATA_DRV_X64      ms-compressed    a952e288a1ead66490b3275a807f52e5
RCDATA_DRV_X86      ms-compressed    231b3385ac17e41c5bb1b1fcb59599c4
RCDATA_DRV_XP_X64   ms-compressed    095a1678021b034903c85dd5acb447ad
RCDATA_DRV_XP_X86   ms-compressed    eb845b7a16ed82bd248e395d9852f467
Trojan.Killdisk     Trojan.Killdisk  1bc44eef75779e3ca1eefb8ff5a64807dbc942b1e4a2672d77b9f6928d292591  Symantec Threat Hunter Team
Trojan.Killdisk     Trojan.Killdisk  0385eeab00e946a302b24a91dea4187c1210597b8e17cd9e2230450f5ece21da  Symantec Threat Hunter Team
Trojan.Killdisk     Trojan.Killdisk  a64c3e0522fad787b95bfb6a30c3aed1b5786e69e88e023c062ec7e5cebf4d3e  Symantec Threat Hunter Team
Ransomware          Trojan.Killdisk  4dc13bb83a16d4ff9865a51b3e4d24112327c526c1392e14d56f20d6f4eaf382  Symantec Threat Hunter Team


Best Practices for Handling Destructive Malware

As noted above, destructive malware can present a direct threat to an organization’s daily operations, impacting the availability of critical assets and data. Organizations should increase vigilance and evaluate their capabilities, encompassing planning, preparation, detection, and response, for such an event. This section focuses on the threat of malware using enterprise-scale distributed propagation methods and provides recommended guidance and considerations for an organization to address as part of its network architecture, security baseline, continuous monitoring, and incident response practices.

CISA and the FBI urge all organizations to implement the following recommendations to increase their cyber resilience against this threat.

Potential Distribution Vectors

Destructive malware may use popular communication tools to spread, including worms sent through email and instant messages, Trojan horses dropped from websites, and virus-infected files downloaded from peer-to-peer connections. Malware seeks to exploit existing vulnerabilities on systems for quiet and easy access.

The malware has the capability to target a large scope of systems and can execute across multiple systems throughout a network. As a result, it is important for organizations to assess their environment for atypical channels for malware delivery and/or propagation throughout their systems. Systems to assess include:

  • Enterprise applications – particularly those that have the capability to directly interface with and impact multiple hosts and endpoints. Common examples include:
    • Patch management systems,
    • Asset management systems,
    • Remote assistance software (typically used by the corporate help desk),
    • Antivirus (AV) software,
    • Systems assigned to system and network administrative personnel,
    • Centralized backup servers, and
    • Centralized file shares.

While not applicable only to malware, threat actors could also compromise additional resources to impact the availability of critical data and applications. Common examples include:

  • Centralized storage devices
    • Potential risk – direct access to partitions and data warehouses.
  • Network devices
    • Potential risk – capability to inject false routes into the routing table, delete specific routes from the routing table, remove/modify configuration attributes, or destroy firmware or system binaries—which could isolate or degrade availability of critical network resources.

Best Practices and Planning Strategies

Common strategies can be followed to strengthen an organization’s resilience against destructive malware. Targeted assessment and enforcement of best practices should be employed for enterprise components susceptible to destructive malware.

Communication Flow
  • Ensure proper network segmentation.
  • Ensure that network-based access control lists (ACLs) are configured to permit server-to-host and host-to-host connectivity via the minimum scope of ports and protocols and that directional flows for connectivity are represented appropriately.
    • Communications flow paths should be fully defined, documented, and authorized.
  • Increase awareness of systems that can be used as a gateway to pivot (lateral movement) or directly connect to additional endpoints throughout the enterprise.
    • Ensure that these systems are contained within restrictive Virtual Local Area Networks (VLANs), with additional segmentation and network access controls.
  • Ensure that centralized network and storage devices’ management interfaces reside on restrictive VLANs.
    • Layered access control, and
    • Device-level access control enforcement – restricting access from only pre-defined VLANs and trusted IP ranges.
Access Control
  • For enterprise systems that can directly interface with multiple endpoints:
    • Require multifactor authentication for interactive logons.
    • Ensure that authorized users are mapped to a specific subset of enterprise personnel.
      • If possible, the “Everyone,” “Domain Users,” or the “Authenticated Users” groups should not be permitted the capability to directly access or authenticate to these systems.
    • Ensure that unique domain accounts are used and documented for each enterprise application service.
      • Context of permissions assigned to these accounts should be fully documented and configured based upon the concept of least privilege.
      • Provides an enterprise with the capability to track and monitor specific actions correlating to an application’s assigned service account.
    • If possible, do not grant a service account local or interactive logon permissions.
      • Service accounts should be explicitly denied permissions to access network shares and critical data locations.
    • Accounts that are used to authenticate to centralized enterprise application servers or devices should not contain elevated permissions on downstream systems and resources throughout the enterprise.
  • Continuously review centralized file share ACLs and assigned permissions.
    • Restrict Write/Modify/Full Control permissions when possible.
  • Audit and review security logs for anomalous references to enterprise-level administrative (privileged) and service accounts.
    • Failed logon attempts,
    • File share access, and
    • Interactive logons via a remote session.
  • Review network flow data for signs of anomalous activity, including:
    • Connections using ports that do not correlate to the standard communications flow associated with an application,
    • Activity correlating to port scanning or enumeration, and
    • Repeated connections using ports that can be used for command and control purposes.
  • Ensure that network devices log and audit all configuration changes.
    • Continually review network device configurations and rule sets to ensure that communications flows are restricted to the authorized subset of rules.
File Distribution
  • When deploying patches or AV signatures throughout an enterprise, stage the distributions to include a specific grouping of systems (staggered over a pre-defined period).
    • This action can minimize the overall impact in the event that an enterprise patch management or AV system is leveraged as a distribution vector for a malicious payload.
  • Monitor and assess the integrity of patches and AV signatures that are distributed throughout the enterprise.
    • Ensure updates are received only from trusted sources,
    • Perform file and data integrity checks, and
    • Monitor and audit – as related to the data that is distributed from an enterprise application.
System and Application Hardening
  • Ensure robust vulnerability management and patching practices are in place. 
    • CISA maintains a living catalog of known exploited vulnerabilities that carry significant risk to federal agencies as well as public and private sector entities. In addition to thoroughly testing and implementing vendor patches in a timely—and, if possible, automated—manner, organizations should ensure patching of the vulnerabilities CISA includes in this catalog.
  • Ensure that the underlying operating system (OS) and dependencies (e.g., Internet Information Services [IIS], Apache, Structured Query Language [SQL]) supporting an application are configured and hardened based upon industry-standard best practice recommendations. Implement application-level security controls based on best practice guidance provided by the vendor. Common recommendations include:
    • Use role-based access control,
    • Prevent end-user capabilities to bypass application-level security controls,
      • For example, do not allow users to disable AV on local workstations.
    • Remove or disable unnecessary or unused features or packages, and
    • Implement robust application logging and auditing.
Recovery and Reconstitution Planning

A business impact analysis (BIA) is a key component of contingency planning and preparation. The overall output of a BIA will provide an organization with two key components (as related to critical mission/business operations):

  • Characterization and classification of system components, and
  • Interdependencies.

In the event that an organization is impacted by destructive malware, recovery and reconstitution efforts should be guided by the identification of the organization’s mission-critical assets and their associated interdependencies.

To plan for this scenario, an organization should address the availability and accessibility for the following resources (and should include the scope of these items within incident response exercises and scenarios):

  • Comprehensive inventory of all mission critical systems and applications:
    • Versioning information,
    • System/application dependencies,
    • System partitioning/storage configuration and connectivity, and
    • Asset owners/points of contact.
  • Contact information for all essential personnel within the organization,
  • Secure communications channel for recovery teams,
  • Contact information for external organizational-dependent resources:
    • Communication providers,
    • Vendors (hardware/software), and
    • Outreach partners/external stakeholders
  • Service contract numbers – for engaging vendor support,
  • Organizational procurement points of contact,
  • Optical disc image (ISO)/image files for baseline restoration of critical systems and applications:
    • OS installation media,
    • Service packs/patches,
    • Firmware, and
    • Application software installation packages.
  • Licensing/activation keys for OS and dependent applications,
  • Enterprise network topology and architecture diagrams,
  • System and application documentation,
  • Hard copies of operational checklists and playbooks,
  • System and application configuration backup files,
  • Data backup files (full/differential),
  • System and application security baseline and hardening checklists/guidelines, and
  • System and application integrity test and acceptance checklists.
Incident Response

Victims of a destructive malware attack should immediately focus on containment to reduce the scope of affected systems. Strategies for containment include:

  • Determining a vector common to all systems experiencing anomalous behavior (or having been rendered unavailable)—from which a malicious payload could have been delivered:
    • Centralized enterprise application,
    • Centralized file share (for which the identified systems were mapped or had access),
    • Privileged user account common to the identified systems,
    • Network segment or boundary, and
    • Common Domain Name System (DNS) server for name resolution.
  • Based upon the determination of a likely distribution vector, additional mitigation controls can be enforced to further minimize impact:
    • Implement network-based ACLs to deny the identified application(s) the capability to directly communicate with additional systems,
      • Provides an immediate capability to isolate and sandbox specific systems or resources.
    • Implement null network routes for specific IP addresses (or IP ranges) from which the payload may be distributed,
      • An organization’s internal DNS can also be leveraged for this task, as a null pointer record could be added within a DNS zone for an identified server or application.
    • Readily disable access for suspected user or service account(s),
    • For suspect file shares (which may be hosting the infection vector), remove access or disable the share path from being accessed by additional systems, and
    • Be prepared to, if necessary, reset all passwords and tickets within directories (e.g., changing golden/silver tickets). 

As related to incident response and incident handling, organizations are encouraged to report incidents to the FBI and CISA (see the Contact section below) and to preserve forensic data for use in internal investigation of the incident or for possible law enforcement purposes. See Technical Approaches to Uncovering and Remediating Malicious Activity for more information.

Contact Information

To report suspicious or criminal activity related to information found in this Joint Cybersecurity Advisory, contact your local FBI field office at, or the FBI’s 24/7 Cyber Watch (CyWatch) at (855) 292-3937 or by email at When available, please include the following information regarding the incident: date, time, and location of the incident; type of activity; number of people affected; type of equipment used for the activity; the name of the submitting company or organization; and a designated point of contact. To request incident response resources or technical assistance related to these threats, contact CISA at



  • February 26, 2022: Initial Revision

This product is provided subject to this Notification and this Privacy & Use policy.

Ukraine & Russia From a Domain Names Perspective , (Thu, Feb 24th)


For a few days now, the eyes of the world have been on the situation between Russia and Ukraine. Today, operations are also conducted in the "cyber" dimension (the classic ones being land, air, sea, and space). This new dimension is used not only for attacks like DDoS against the enemy but also for propaganda like spreading fake news.

A Good Old Equation Editor Vulnerability Delivering Malware, (Tue, Feb 22nd)


Here is another sample demonstrating how attackers still rely on good old vulnerabilities… In 2017, Microsoft Office suffered from a critical vulnerability that affected its Equation Editor tool, known as CVE-2017-11882[1]. It's a memory corruption vulnerability that leads to remote code execution, which is pretty bad. It was heavily exploited at the time, and I was curious to find a new document being spread with the same good old vulnerability.

Happy 10th Birthday, DynamoDB! 🎉🎂🎁


On January 18, 2012, Jeff and Werner announced the general availability of Amazon DynamoDB, a fully managed, flexible NoSQL database service delivering single-digit millisecond performance at any scale.

During the last 10 years, hundreds of thousands of customers have adopted DynamoDB. It regularly reaches new peaks of performance and scalability. For example, during the last Prime Day sales in June 2021, it handled trillions of requests over 66 hours while maintaining single-digit millisecond performance and peaked at 89.2 million requests per second. Disney+ uses DynamoDB to ingest content, metadata, and billions of viewer actions each day. Even during the unprecedented demand caused by the pandemic, DynamoDB was able to help customers as many across the world had to change their way of working and needed to meet and conduct business virtually. For example, Zoom was able to scale from 10 million to 300 million daily meeting participants when we all started to make video calls in early 2020.

A decade of innovation with Amazon DynamoDB

On this special anniversary, join us for a unique online event on Twitch on March 1st. I’ll tell you more about this at the end of this post. But before talking about this event, let’s take this opportunity to reflect on the genesis of this service and the main capabilities we have added since the original launch 10 years ago.

The History Behind DynamoDB
The story of DynamoDB started long before the launch 10 years ago. It started with a series of outages on Amazon’s e-commerce platform during the holiday shopping season in 2004. At that time, Amazon was transitioning from a monolithic architecture to microservices. The design principle was (and still is) that each stateful microservice uses its own data store, and other services are required to access a microservice’s data through a publicly exposed API. Direct database access was not an option anymore. At that time, most microservices were using a relational database provided by a third-party vendor. Given the volume of traffic during the holiday season in 2004, the database system experienced some hard-to-debug and hard-to-reproduce deadlocks. The e-commerce platform was pushing the relational databases to their limits, despite the fact that we were using simple usage patterns, such as query by primary keys only. These usage patterns do not require the complexity of a relational database.

At Amazon and AWS, after an outage happens, we start a process called Correction of Error (COE) to document the root cause of the issue, to describe how we fixed it, and to detail the changes we’re making to avoid recurrence. During the COE for this database issue, a young, naïve, 20-year-old intern named Swaminathan (Swami) Sivasubramanian (now VP of the database, analytics, and ML organization at AWS) asked the question, “Why are we using a relational database for this? These workloads don’t need the SQL level of complexity and transactional guarantees.”

This led Amazon to rethink the architecture of its data stores and to build the original Dynamo database. The objective was to address the demanding scalability and reliability requirements of the Amazon e-commerce platform. This non-relational, key-value database was initially targeted at use cases that were the core of the Amazon e-commerce operations, such as the shopping basket and the session service.

Amazon published the Dynamo paper in 2007, three years later, to describe our design principles and share the lessons learned from running this database in support of Amazon’s core e-commerce operations. Over the years, we saw several Dynamo clones appear, proving other companies were searching for scalable solutions, just like Amazon.

After a couple of years, Dynamo was adopted by several core service teams at Amazon. Their engineers were very satisfied with the performance and scalability. However, we started to interview engineers to understand why it was not more broadly adopted within Amazon. We learned that Dynamo was giving teams the reliability, performance, and scalability they needed, but it did not simplify the operational complexity of running the system. Teams still needed to install, configure, and operate the system in Amazon’s data centers.

At the time, AWS was offering Amazon SimpleDB as a NoSQL service. Many teams preferred the operational simplicity of SimpleDB despite the difficulty of scaling a domain beyond 10 GB, its unpredictable latency (which was affected by the size of the database and its indexes), and its eventual consistency model.

We concluded that the ideal solution would combine the strengths of Dynamo—the scalability and the predictable low latency to retrieve data—with the operational simplicity of SimpleDB: just declare a table and let the system handle the low-level complexity transparently.

DynamoDB was born.

DynamoDB frees developers from the complexity of managing hardware and software. It handles the complexity of scaling for you, partitioning and re-partitioning your data to meet your throughput requirements. It scales seamlessly without the need to manually re-partition tables, and it provides predictable, low-latency access to your data (single-digit milliseconds).
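Hash-based partitioning of this kind can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not DynamoDB's actual internal implementation, and the function name is invented:

```python
import hashlib

def partition_for_key(partition_key: str, num_partitions: int) -> int:
    """Map an item's partition key to one of num_partitions partitions.
    A hash of the partition key decides where an item is stored; MD5 is
    used here purely for illustration."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# The same key always lands on the same partition, so lookups are O(1)
# regardless of table size.
assert partition_for_key("customer#42", 8) == partition_for_key("customer#42", 8)

# Different keys spread across partitions, so throughput grows as
# partitions are added.
keys = [f"customer#{i}" for i in range(1000)]
used = {partition_for_key(k, 8) for k in keys}
print(f"{len(used)} of 8 partitions used")
```

Because the hash spreads keys evenly, adding partitions increases aggregate throughput without hot spots, which is the property that lets the system re-partition transparently.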

At AWS, the moment we launch a new service is not the end of the project. It is actually the beginning. Over the last 10 years, we have continuously listened to your feedback, and we have brought new capabilities to DynamoDB. In addition to hundreds of incremental improvements, we added:

… and many more.

Lastly, during the last AWS re:Invent conference, we announced Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). This new DynamoDB table class allows you to lower the cost of data storage for infrequently accessed data by 60%. The ideal use case is for data that you need to keep for the long term and that your application needs to occasionally access, without compromising on access latency. In the past, to lower storage costs for such data, you were writing code to move infrequently accessed data to lower-cost storage alternatives, such as Amazon Simple Storage Service (Amazon S3). Now you can switch to the DynamoDB Standard-IA table class to store infrequently accessed data while preserving the high availability and performance of DynamoDB.
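A back-of-the-envelope calculation shows what a 60% lower storage price means in practice. The per-GB price below is an illustrative assumption, not an official figure; check the DynamoDB pricing page for actual prices in your Region:

```python
GB_STORED = 500  # infrequently accessed data kept for the long term

# Illustrative per-GB-month price for the Standard table class (assumption).
STANDARD_PRICE = 0.25
# Standard-IA storage costs 60% less per GB-month.
STANDARD_IA_PRICE = STANDARD_PRICE * (1 - 0.60)

standard_monthly = GB_STORED * STANDARD_PRICE
standard_ia_monthly = GB_STORED * STANDARD_IA_PRICE
print(f"Standard:    ${standard_monthly:.2f}/month")
print(f"Standard-IA: ${standard_ia_monthly:.2f}/month")
```

The data stays in DynamoDB with the same availability and access latency; only the storage billing rate changes, so no data-movement code is needed.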

How To Get Started
To get started with DynamoDB as a developer, you can refer to the Getting Started Guide in our documentation or read the excellent DynamoDB, Explained, written by Alex DeBrie, one of our AWS Heroes and the author of The DynamoDB Book. To dive deep into DynamoDB data modeling, AWS Hero Jeremy Daly is preparing a video course, “DynamoDB Modeling for the rest of us”.

Customers now use DynamoDB across virtually every industry vertical, geographic area, and company size. You continually surprise us with how you innovate on DynamoDB, and you continually push us to evolve it to make it easier to build the next generation of applications. We are going to continue to work backwards from your feedback to meet your ever-evolving needs and to enable you to innovate and scale for decades to come.

A Decade of Innovation with DynamoDB – A Virtual Event
As I mentioned at the beginning, we also would love to celebrate this anniversary with you. We prepared a live Twitch event for you to learn best practices, see technical demos, and attend a live Q&A. You will hear stories from two of our long-time customers: SmugMug CEO Don MacAskill and engineering leaders from Dropbox. In addition, you’ll get a chance to chat with and ask your questions to AWS’ blog legend and Chief Evangelist Jeff Barr, as well as DynamoDB’s product managers and engineers. Finally, AWS Heroes Alex DeBrie and Jeremy Daly will host two deep-dive technical sessions. Have a look at the full agenda here.

This will be live on Twitch on March 1st, and you can register today. The first 1,000 registrants from the US will receive a free digital copy of The DynamoDB Book (a $79 retail value).

To DynamoDB’s next 10 years. Cheers 🥂.

— seb

New for Amazon CodeGuru Reviewer – Detector Library and Security Detectors for Log-Injection Flaws

Amazon CodeGuru Reviewer is a developer tool that detects security vulnerabilities in your code and provides intelligent recommendations to improve code quality. For example, CodeGuru Reviewer introduced Security Detectors for Java and Python code to identify security risks from the top ten Open Web Application Security Project (OWASP) categories and follow security best practices for AWS APIs and common crypto libraries. At re:Invent, CodeGuru Reviewer introduced a secrets detector to identify hardcoded secrets and suggest remediation steps to secure your secrets with AWS Secrets Manager. These capabilities help you find and remediate security issues before you deploy.

Today, I am happy to share two new features of CodeGuru Reviewer:

  • A new Detector Library describes in detail the detectors that CodeGuru Reviewer uses when looking for possible defects and includes code samples for both Java and Python.
  • New security detectors have been introduced for detecting log-injection flaws in Java and Python code, similar to what happened with the recent Apache Log4j vulnerability we described in this blog post.

Let’s see these new features in more detail.

Using the Detector Library
To help you understand more clearly which detectors CodeGuru Reviewer uses to review your code, we are now sharing a Detector Library where you can find detailed information and code samples.

These detectors help you build secure and efficient applications on AWS. In the Detector Library, you can find detailed information about CodeGuru Reviewer’s security and code quality detectors, including descriptions, their severity and potential impact on your application, and additional information that helps you mitigate risks.

Note that each detector looks for a wide range of code defects. We include one noncompliant and one compliant code example for each detector. However, CodeGuru uses machine learning and automated reasoning to identify possible issues, so each detector can find a range of defects beyond the explicit code example shown on its description page.

Let’s have a look at a few detectors. One detector looks for insecure cross-origin resource sharing (CORS) policies that are too permissive and may lead to loading content from untrusted or malicious sources.

Detector Library screenshot.
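To make the pattern concrete, here is a hypothetical Python sketch of the kind of overly permissive CORS policy such a detector flags, next to a stricter alternative. The function names and allow list are invented for illustration; they are not CodeGuru's own sample code:

```python
def cors_headers_noncompliant() -> dict:
    # Noncompliant: the wildcard lets any origin read the response.
    return {"Access-Control-Allow-Origin": "*"}

# An explicit allow list of trusted origins (illustrative value).
ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers_compliant(request_origin: str) -> dict:
    # Compliant: only echo back origins that appear in the allow list.
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}
```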

Another detector checks for improper input validation that can enable attacks and lead to unwanted behavior.

Detector Library screenshot.
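A hypothetical Python sketch of the improper-input-validation pattern follows. The function names and the validation rule are invented for illustration:

```python
import re

# Illustrative rule: usernames are 3-32 word characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def handle_request_noncompliant(username: str) -> str:
    # Noncompliant: the untrusted value flows onward without any check.
    return f"Hello, {username}"

def handle_request_compliant(username: str) -> str:
    # Compliant: reject input that does not match the expected pattern.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    return f"Hello, {username}"
```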

Specific detectors help you use the AWS SDK for Java and the AWS SDK for Python (Boto3) in your applications. For example, there are detectors that can detect hardcoded credentials, such as passwords and access keys, or inefficient polling of AWS resources.
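A hypothetical sketch of the hardcoded-credentials pattern in Python (the names and values here are invented, and the commented-out key is fake):

```python
import os

# Noncompliant: a credential embedded in source code (fake example value).
# API_KEY = "AKIAEXAMPLEFAKEKEY123"

def get_api_key() -> str:
    # Compliant: read the credential from the environment (or a secrets
    # manager) at runtime instead of committing it to the repository.
    key = os.environ.get("MY_SERVICE_API_KEY")
    if key is None:
        raise RuntimeError("MY_SERVICE_API_KEY is not set")
    return key
```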

New Detectors for Log-Injection Flaws
Following the recent Apache Log4j vulnerability, we introduced in CodeGuru Reviewer new detectors that check if you’re logging anything that is not sanitized and possibly executable. These detectors cover the issue described in CWE-117: Improper Output Neutralization for Logs.

These detectors work with Java and Python code and, for Java, are not limited to the Log4j library. They don’t work by looking at the version of the libraries you use, but check what you are actually logging. In this way, they can protect you if similar bugs happen in the future.

Detector Library screenshot.

As these detectors recommend, sanitize user-provided inputs before logging them. This prevents an attacker from using that input to break the integrity of your logs, forge log entries, or bypass log monitors.
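A minimal Python sketch of this kind of sanitization, assuming a plain-text log format where stripping CR/LF characters is enough to prevent forged entries (the function names are invented for illustration):

```python
import logging

logger = logging.getLogger("app")

def sanitize_for_log(value: str) -> str:
    # Strip CR/LF so attacker-controlled input cannot start a new log
    # line and forge entries (CWE-117).
    return value.replace("\r", "").replace("\n", "")

def log_login_noncompliant(user_input: str) -> None:
    # Noncompliant: raw input reaches the log; embedded newlines would
    # appear as separate, forged log lines.
    logger.info("login attempt for user: %s", user_input)

def log_login_compliant(user_input: str) -> None:
    # Compliant: the input is neutralized before it is logged.
    logger.info("login attempt for user: %s", sanitize_for_log(user_input))
```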

Availability and Pricing
These new features are available today in all AWS Regions where Amazon CodeGuru is offered. For more information, see the AWS Regional Services List.

The Detector Library is free to browse as part of the documentation. For the new detectors looking for log-injection flaws, standard pricing applies. See the CodeGuru pricing page for more information.

Start using Amazon CodeGuru Reviewer today to improve the security of your code.