R730 SSD Drives for VMware VSAN 7.0

Hi All.

I am hoping someone can assist me.

We are going to build a new all-flash vSAN 7.0 setup with the following specs.

Hardware model: Dell PowerEdge R730.

DELL HBA 330 controller.

I have a question about disk selection.

8 drives in total (the total number of drives we will use in each R730).

I wish to use 1 TB SAS SSDs for the cache tier and 2 TB SAS SSDs for the capacity tier: the most economical SSDs that are fully supported by both vSAN and Dell.

I'm not sure which SAS SSDs are compatible with my R730 servers while also being on the vSAN HCL (which is crucial).

The VMware HCL does not list Dell part numbers, so matching them is proving very difficult. My supplier is sending me pricing for drives that differs hugely, but still hasn't confirmed support on both platforms.

Can someone please help me identify 1 TB SAS SSDs for the cache tier and 2 TB SAS SSDs for the capacity tier that I can double-check and use in the R730 servers? Whatever we choose also needs to be listed on the VMware vSAN Hardware Compatibility List:

https://www.vmware.com/resources/compatibility/pdf/vi_vsan_guide.pdf (Dell disks start on page 237).

 

Thanks.

Manivel RR

 

WebEx Meetings and Productivity Tools Settings

I searched and cannot find anyone who has been successful with this. I am in the middle of trying to capture the WebEx Meetings app without the Productivity Tools, to see if it is just the Productivity Tools that cause the problem.

I have looked at existing configs people have posted, used the profiler, and stared at Procmon until my head hurt on multiple days. I do not see any access-denied errors or anything else that points to something not working, but on every restart the user has to log in to WebEx again.

 

Does anyone have it successfully working in their environment?

 

Here is my current non-working config, which I borrowed from elsewhere in the forum:

[IncludeRegistryTrees]
HKCU\Software\Cisco Systems, Inc.\Cisco Webex Meetings Virtual Desktop
HKCU\Software\Cisco Systems, Inc.\Cisco Webex Meetings
HKCU\Software\ActiveTouch
HKCU\SOFTWARE\WebEx
HKCU\Software\WebEx_Outlook
HKLM\SOFTWARE\Wow6432Node\Webex
HKLM\SOFTWARE\Wow6432Node\WebEx_Outlook
HKLM\SOFTWARE\Wow6432Node\WebEx_Notes
HKLM\SOFTWARE\WebEx

[IncludeFolderTrees]
<AppData>\Webex
<LocalAppData>\WebEx
<AppData>\Roaming\webex
<UserProfile>\AppData\LocalLow\WebEx

[ExcludeFolderTrees]
<LocalAppData>\Cisco\Unified Communications\WebexMeetings\CSFLogs

Microsoft Teams installed in an App Package on AppVolumes 4.1 (2006)

We’ve successfully been using MS Teams on an appstack with 2.18.x without too many issues, but we are running into weird issues with the same setup using AppVolumes 4.x.

 

We have the Teams installer stored with our Office 365 package, and everything related to Office 365 seems to work, but Teams doesn't load. It seems to be missing the C:\ProgramData\%username% folder that Teams is installed from.

 

If we choose to install it with the ALLUSER=1 ALLUSERS=1 switches, it seems to want to validate that the Horizon or Citrix agents are also installed. Our AppStack builder machines historically do not have the Horizon agent installed. I'm willing to try this, but I don't know if there are adverse effects from having the agent installed when we build the packages.

Does anyone have a concrete way to install Teams successfully with App Volumes?

We do have a use case where we do _not_ want it on the golden image, because we have exempt staff that do not use Teams or have an Office 365 license.

Store and Access Time Series Data at Any Scale with Amazon Timestream – Now Generally Available

Time series are a very common data format that describes how things change over time. Some of the most common sources are industrial machines and IoT devices, IT infrastructure stacks (such as hardware, software, and networking components), and applications that share their results over time. Managing time series data efficiently is not easy because the data model doesn’t fit general-purpose databases.

For this reason, I am happy to share that Amazon Timestream is now generally available. Timestream is a fast, scalable, and serverless time series database service that makes it easy to collect, store, and process trillions of time series events per day, up to 1,000 times faster and at as little as 1/10th the cost of a relational database.

This is made possible by the way Timestream is managing data: recent data is kept in memory and historical data is moved to cost-optimized storage based on a retention policy you define. All data is always automatically replicated across multiple availability zones (AZ) in the same AWS region. New data is written to the memory store, where data is replicated across three AZs before returning success of the operation. Data replication is quorum based such that the loss of nodes, or an entire AZ, does not disrupt durability or availability. In addition, data in the memory store is continuously backed up to Amazon Simple Storage Service (S3) as an extra precaution.

Queries automatically access and combine recent and historical data across tiers without the need to specify the storage location, and support time series-specific functionalities to help you identify trends and patterns in data in near real time.

There are no upfront costs; you pay only for the data you write, store, or query. Based on the load, Timestream automatically scales up or down to adjust capacity, without the need to manage the underlying infrastructure.

Timestream integrates with popular services for data collection, visualization, and machine learning, making it easy to use with existing and new applications. For example, you can ingest data directly from AWS IoT Core, Amazon Kinesis Data Analytics for Apache Flink, AWS IoT Greengrass, and Amazon MSK. You can visualize data stored in Timestream from Amazon QuickSight, and use Amazon SageMaker to apply machine learning algorithms to time series data, for example for anomaly detection. You can use Timestream's fine-grained AWS Identity and Access Management (IAM) permissions to easily ingest or query data from an AWS Lambda function. We are providing the tools to use Timestream with open source platforms such as Apache Kafka, Telegraf, Prometheus, and Grafana.

Using Amazon Timestream from the Console
In the Timestream console, I select Create database. I can choose to create a Standard database or a Sample database populated with sample data. I proceed with a standard database and I name it MyDatabase.

All Timestream data is encrypted by default. I use the default master key, but you can use a customer managed key that you created using AWS Key Management Service (KMS). In that way, you can control the rotation of the master key, and who has permissions to use or manage it.

I complete the creation of the database. Now my database is empty. I select Create table and name it MyTable.

Each table has its own data retention policy. Data is first ingested into the memory store, where it can be kept from a minimum of one hour to a maximum of a year. After that, it is automatically moved to the magnetic store, where it can be kept from a minimum of one day to a maximum of 200 years, after which it is deleted. In my case, I select 1 hour of memory store retention and 5 years of magnetic store retention.
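If you prefer to script this step rather than use the console, here is a minimal sketch of the equivalent setup with boto3; the database and table names and the retention values match the choices above, and the KMS key comment is an optional assumption.

import boto3

write_client = boto3.client('timestream-write')

# Create the database (optionally pass KmsKeyId='...' to use a customer managed key).
write_client.create_database(DatabaseName='MyDatabase')

# Create the table with the same retention policy selected in the console:
# 1 hour in the memory store and about 5 years (1825 days) in the magnetic store.
write_client.create_table(
    DatabaseName='MyDatabase',
    TableName='MyTable',
    RetentionProperties={
        'MemoryStoreRetentionPeriodInHours': 1,
        'MagneticStoreRetentionPeriodInDays': 1825
    }
)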

When writing data in Timestream, you cannot insert data that is older than the retention period of the memory store. For example, in my case I will not be able to insert records older than 1 hour. Similarly, you cannot insert data with a future timestamp.

I complete the creation of the table. As you noticed, I was not asked for a data schema. Timestream will automatically infer that as data is ingested. Now, let’s put some data in the table!

Loading Data in Amazon Timestream
Each record in a Timestream table is a single data point in the time series and contains the following (a sample record is shown after this list):

  • The measure name, type, and value. Each record can contain a single measure, but different measure names and types can be stored in the same table.
  • The timestamp of when the measure was collected, with nanosecond granularity.
  • Zero or more dimensions that describe the measure and can be used to filter or aggregate data. Records in a table can have different dimensions.
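Putting those pieces together, a single record passed to the WriteRecords API looks roughly like the following sketch (the values are hypothetical; the collect.py application below builds the same structure):

record = {
    'Dimensions': [
        {'Name': 'country', 'Value': 'UK'},
        {'Name': 'city', 'Value': 'London'},
        {'Name': 'hostname', 'Value': 'MyHostname'}
    ],
    'MeasureName': 'cpu_utilization',
    'MeasureValue': '31.6',
    'MeasureValueType': 'DOUBLE',
    'Time': '1601510400000'  # epoch time in milliseconds
}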

For example, let’s build a simple monitoring application collecting CPU, memory, swap, and disk usage from a server. Each server is identified by a hostname and has a location expressed as a country and a city.

In this case, the dimensions would be the same for all records:

  • country
  • city
  • hostname

Records in the table are going to measure different things. The measure names I use are:

  • cpu_utilization
  • memory_utilization
  • swap_utilization
  • disk_utilization

Measure type is DOUBLE for all of them.

For the monitoring application, I am using Python. To collect monitoring information I use the psutil module that I can install with:

pip3 install psutil

Here’s the code for the collect.py application:

import time
import boto3
import psutil

from botocore.config import Config

DATABASE_NAME = "MyDatabase"
TABLE_NAME = "MyTable"

COUNTRY = "UK"
CITY = "London"
HOSTNAME = "MyHostname" # You can make it dynamic using socket.gethostname()

INTERVAL = 1 # Seconds

def prepare_record(measure_name, measure_value):
    record = {
        'Time': str(current_time),
        'Dimensions': dimensions,
        'MeasureName': measure_name,
        'MeasureValue': str(measure_value),
        'MeasureValueType': 'DOUBLE'
    }
    return record


def write_records(records):
    try:
        result = write_client.write_records(DatabaseName=DATABASE_NAME,
                                            TableName=TABLE_NAME,
                                            Records=records,
                                            CommonAttributes={})
        status = result['ResponseMetadata']['HTTPStatusCode']
        print("Processed %d records. WriteRecords Status: %s" %
              (len(records), status))
    except Exception as err:
        print("Error:", err)


if __name__ == '__main__':

    session = boto3.Session()
    write_client = session.client('timestream-write', config=Config(
        read_timeout=20, max_pool_connections=5000, retries={'max_attempts': 10}))
    query_client = session.client('timestream-query')

    dimensions = [
        {'Name': 'country', 'Value': COUNTRY},
        {'Name': 'city', 'Value': CITY},
        {'Name': 'hostname', 'Value': HOSTNAME},
    ]

    records = []

    while True:

        current_time = int(time.time() * 1000)
        cpu_utilization = psutil.cpu_percent()
        memory_utilization = psutil.virtual_memory().percent
        swap_utilization = psutil.swap_memory().percent
        disk_utilization = psutil.disk_usage('/').percent

        records.append(prepare_record('cpu_utilization', cpu_utilization))
        records.append(prepare_record(
            'memory_utilization', memory_utilization))
        records.append(prepare_record('swap_utilization', swap_utilization))
        records.append(prepare_record('disk_utilization', disk_utilization))

        print("records {} - cpu {} - memory {} - swap {} - disk {}".format(
            len(records), cpu_utilization, memory_utilization,
            swap_utilization, disk_utilization))

        if len(records) == 100:
            write_records(records)
            records = []

        time.sleep(INTERVAL)

I start the collect.py application. Every 100 records, data is written to the MyTable table:

$ python3 collect.py
records 4 - cpu 31.6 - memory 65.3 - swap 73.8 - disk 5.7
records 8 - cpu 18.3 - memory 64.9 - swap 73.8 - disk 5.7
records 12 - cpu 15.1 - memory 64.8 - swap 73.8 - disk 5.7
. . .
records 96 - cpu 44.1 - memory 64.2 - swap 73.8 - disk 5.7
records 100 - cpu 46.8 - memory 64.1 - swap 73.8 - disk 5.7
Processed 100 records. WriteRecords Status: 200
records 4 - cpu 36.3 - memory 64.1 - swap 73.8 - disk 5.7
records 8 - cpu 31.7 - memory 64.1 - swap 73.8 - disk 5.7
records 12 - cpu 38.8 - memory 64.1 - swap 73.8 - disk 5.7
. . .

Now, in the Timestream console, I see the schema of the MyTable table, automatically updated based on the data ingested:

Note that, since all measures in the table are of type DOUBLE, the measure_value::double column contains the value for all of them. If the measures were of different types (for example, INT or BIGINT) I would have more columns (such as measure_value::int and measure_value::bigint).

In the console, I can also see a recap of which kinds of measures I have in the table, their corresponding data types, and the dimensions used for each specific measure:

Querying Data from the Console
I can query time series data using SQL. The memory store is optimized for fast point-in-time queries, while the magnetic store is optimized for fast analytical queries. However, queries automatically process data on all stores (memory and magnetic) without having to specify the data location in the query.

I am running queries straight from the console, but I can also use JDBC connectivity to access the query engine. I start with a basic query to see the most recent records in the table:

SELECT * FROM MyDatabase.MyTable ORDER BY time DESC LIMIT 8
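Besides the console and JDBC, the query API can also be called from code. Here is a minimal sketch using the boto3 timestream-query client (the same client created as query_client in collect.py) that runs the query above and prints each returned row:

import boto3

query_client = boto3.client('timestream-query')

response = query_client.query(
    QueryString='SELECT * FROM MyDatabase.MyTable ORDER BY time DESC LIMIT 8')

# Print the column names, then the scalar value of each datum in each row.
print([column['Name'] for column in response['ColumnInfo']])
for row in response['Rows']:
    print([datum.get('ScalarValue') for datum in row['Data']])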

Let’s try something a little more complex. I want to see the average CPU utilization aggregated by hostname in 5-minute intervals for the last two hours. I filter records based on the content of measure_name. I use the function bin() to round time to a multiple of an interval size, and the function ago() to compare timestamps:

SELECT hostname,
       bin(time, 5m) as binned_time,
       avg(measure_value::double) as avg_cpu_utilization
  FROM MyDatabase.MyTable
 WHERE measure_name = 'cpu_utilization'
   AND time > ago(2h)
 GROUP BY hostname, bin(time, 5m)

When collecting time series data, you may miss some values. This is quite common, especially for distributed architectures and IoT devices. Timestream has some interesting functions that you can use to fill in the missing values, for example using linear interpolation or the last observation carried forward.
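As a rough sketch of what that looks like, reusing the query_client from the previous snippet and assuming the schema above, the following query builds a per-host time series of CPU utilization for the last two hours and fills gaps at 15-second steps with linear interpolation (INTERPOLATE_LOCF is the analogous function for last observation carried forward):

interpolation_query = """
SELECT hostname,
       INTERPOLATE_LINEAR(
           CREATE_TIME_SERIES(time, measure_value::double),
           SEQUENCE(min(time), max(time), 15s)) AS interpolated_cpu_utilization
  FROM MyDatabase.MyTable
 WHERE measure_name = 'cpu_utilization'
   AND time > ago(2h)
 GROUP BY hostname
"""

# Each returned row contains the hostname and a time series of interpolated points.
response = query_client.query(QueryString=interpolation_query)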

More generally, Timestream offers many functions that help you to use mathematical expressions, manipulate strings, arrays, and date/time values, use regular expressions, and work with aggregations/windows.

To experience what you can do with Timestream, you can create a sample database and add the two IoT and DevOps datasets that we provide. Then, in the console query interface, look at the sample queries to get a glimpse of some of the more advanced functionalities:

Using Amazon Timestream with Grafana
One of the most interesting aspects of Timestream is the integration with many platforms. For example, you can visualize your time series data and create alerts using Grafana 7.1 or higher. The Timestream plugin is part of the open source edition of Grafana.

I add a new GrafanaDemo table to my database, and use another sample application to continuously ingest data. The application simulates performance data collected from a microservice architecture running on thousands of hosts.

I install Grafana on an Amazon Elastic Compute Cloud (EC2) instance and add the Timestream plugin using the Grafana CLI.

$ grafana-cli plugins install grafana-timestream-datasource

I use SSH Port Forwarding to access the Grafana console from my laptop:

$ ssh -L 3000:<EC2-Public-DNS>:3000 -N -f ec2-user@<EC2-Public-DNS>

In the Grafana console, I configure the plugin with the right AWS credentials, and the Timestream database and table. Now, I can select the sample dashboard, distributed as part of the Timestream plugin, using data from the GrafanaDemo table where performance data is continuously collected:

Available Now
Amazon Timestream is available today in US East (N. Virginia), Europe (Ireland), US West (Oregon), and US East (Ohio). You can use Timestream with the console, the AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation. With Timestream, you pay based on the number of writes, the data scanned by the queries, and the storage used. For more information, please see the pricing page.

You can find more sample applications in this repo. To learn more, please see the documentation. It’s never been easier to work with time series, including data ingestion, retention, access, and storage tiering. Let me know what you are going to build!

Danilo

Adding a choice before component selection

Hello support.

 

I’m wondering if there is a way to have a drop-down choice that appears after the license agreement and the install-directory selection but before component selection. I'm trying to control which components are visible based on a choice made before the selection, but I can't find a way to add a choice before component selection.

 

My plan B is to put both options in the component selection as components and use a non-visible component with rules that allow only one of the option components to be selected, but this seems clunky compared to just asking the user to make a choice before selecting components.

PowerCLI and PowerShell 7.0.3 on Windows: Exception: VMware.VimAutomation.HorizonView module is not currently supported on the Core edition of PowerShell

When I run Import-Module VMware.PowerCLI, I receive the following error:

 

Import-Module Vmware.PowerCLI

Exception: VMware.VimAutomation.HorizonView module is not currently supported on the Core edition of PowerShell.

 

PS 7.0.3 version:

PS C:\> $psversiontable

Name                           Value
----                           -----
PSVersion                      7.0.3
PSEdition                      Core
GitCommitId                    7.0.3
OS                             Microsoft Windows 10.0.14393
Platform                       Win32NT
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
WSManStackVersion              3.0

 

PS C:\> Get-Module -ListAvailable -Name *Vmware* | FT -AutoSize

    Directory: C:\Users\localscriptuser\Documents\PowerShell\Modules

ModuleType Version         PreRelease Name                                    PSEdition ExportedCommands
---------- -------         ---------- ----                                    --------- ----------------
Script     12.0.0.15947289            VMware.CloudServices                    Desk      {Connect-Vcs, Get-VcsOrganizationRole, Get-VcsService, Get-VcsServiceRole…}
Script     7.0.0.15902843             VMware.DeployAutomation                 Desk      {Add-DeployRule, Add-ProxyServer, Add-ScriptBundle, Copy-DeployRule…}
Script     7.0.0.15902843             VMware.ImageBuilder                     Desk      {Add-EsxSoftwareDepot, Add-EsxSoftwarePackage, Compare-EsxImageProfile, Export-EsxImageProfile…}
Manifest   12.0.0.15947286            VMware.PowerCLI                         Desk
Script     7.0.0.15939650             VMware.Vim                              Desk
Script     12.0.0.15939657            VMware.VimAutomation.Cis.Core           Desk      {Connect-CisServer, Disconnect-CisServer, Get-CisService}
Script     12.0.0.15940183            VMware.VimAutomation.Cloud              Desk      {Add-CIDatastore, Connect-CIServer, Disconnect-CIServer, Get-Catalog…}
Script     12.0.0.15939652            VMware.VimAutomation.Common             Desk      {Get-Task, New-OAuthSecurityContext, Stop-Task, Wait-Task}
Script     12.0.0.15939655            VMware.VimAutomation.Core               Desk      {Add-PassthroughDevice, Add-VirtualSwitchPhysicalNetworkAdapter, Add-VMHost, Add-VMHostNtpServer…}
Script     12.0.0.15939647            VMware.VimAutomation.Hcx                Desk      {Connect-HCXServer, Disconnect-HCXServer, Get-HCXAppliance, Get-HCXComputeProfile…}
Script     7.12.0.15718406            VMware.VimAutomation.HorizonView        Desk      {Connect-HVServer, Disconnect-HVServer}
Script     12.0.0.15939670            VMware.VimAutomation.License            Desk      Get-LicenseDataManager
Script     12.0.0.15939671            VMware.VimAutomation.Nsxt               Desk      {Connect-NsxtServer, Disconnect-NsxtServer, Get-NsxtPolicyService, Get-NsxtService}
Script     12.0.0.15939651            VMware.VimAutomation.Sdk                Desk      {Get-ErrorReport, Get-PSVersion, Get-InstallPath}
Script     12.0.0.15939672            VMware.VimAutomation.Security           Desk      {Add-AttestationServiceInfo, Add-KeyProviderServiceInfo, Add-TrustAuthorityKeyProviderServer, Add-TrustAuthorityKeyProviderServerCertificate…}
Script     11.5.0.14899557            VMware.VimAutomation.Srm                Desk      {Connect-SrmServer, Disconnect-SrmServer}
Script     12.0.0.15939648            VMware.VimAutomation.Storage            Desk      {Add-EntityDefaultKeyProvider, Add-KeyManagementServer, Add-VsanFileServiceOvf, Add-VsanObjectToRepairQueue…}
Script     1.3.0.0                    VMware.VimAutomation.StorageUtility     Desk      Update-VmfsDatastore
Script     12.0.0.15940185            VMware.VimAutomation.Vds                Desk      {Add-VDSwitchPhysicalNetworkAdapter, Add-VDSwitchVMHost, Export-VDPortGroup, Export-VDSwitch…}
Script     12.0.0.15947287            VMware.VimAutomation.Vmc                Desk      {Add-VmcSddcHost, Connect-Vmc, Disconnect-Vmc, Get-AwsAccount…}
Script     12.0.0.15940184            VMware.VimAutomation.vROps              Desk      {Connect-OMServer, Disconnect-OMServer, Get-OMAlert, Get-OMAlertDefinition…}
Script     12.0.0.15947288            VMware.VimAutomation.WorkloadManagement Desk      {Get-WMNamespace, Get-WMNamespacePermission, Get-WMNamespaceStoragePolicy, New-WMNamespace…}
Script     6.5.1.7862888              VMware.VumAutomation                    Desk      {Add-EntityBaseline, Copy-Patch, Get-Baseline, Get-Compliance…}

Issue with installing Windows Server 2019 on ESXi 6.0.0

Now that I’ve got my additional space in ESXi, I am trying to install Windows Server 2019, but I can’t get the vSphere Client to boot the VM from my Server 2019 ISO. Just for troubleshooting, I set up a new VM, attached a Windows Server 2012 ISO, and it booted right up. I looked up the requirements, and 2019 is supported with ESXi 6.0. The 2019 ISO is valid, since I’ve used it with Hyper-V.

 

I can only go up to Windows Server 2012 when selecting a guest OS version. Any ideas why I can’t boot Server 2019?

SecretManagement and SecretStore Updates

Two updated preview releases are now available on the PowerShell Gallery: SecretManagement preview 4 and SecretStore preview 2.

Please note that these preview releases contain breaking changes. This version of SecretStore is incompatible with previous versions because the configuration format has changed. The previous file store cannot be read by the new version, and you will need to reset the store with Reset-SecretStore after installing the new version. Be sure to save your current stored secrets before upgrading so they can be re-added after you reset your SecretStore.

To install these updates run the following commands:

Install-Module SecretManagement -Force -AllowPrerelease 
Install-Module SecretStore -Force -AllowPrerelease 
Reset-SecretStore

SecretManagement Preview 4 Updates

This update to SecretManagement addresses a bug blocking vault registration on Windows PowerShell, adds additional pipeline support, and makes several changes to improve the vault developer experience. Read the full list of changes below:

  • Fixes issue with registering extension vaults on Windows PowerShell (Error: Cannot bind argument to parameter ‘Path’ …)
  • Changes SecretVaultInfo VaultName property to Name, for consistency
  • Changes the Test-SecretVault -Vault parameter to -Name for consistency
  • Adds -AllowClobber parameter switch to Register-SecretVault, to allow overwriting existing vault
  • Register-SecretVault uses module name as the friendly name if the -Name parameter is not used
  • Unregister-SecretVault now supports Name parameter argument from pipeline
  • Set-DefaultVault now supports Name and SecretVaultInfo parameter arguments from the pipeline
  • Set-Secret now supports SecretInfo objects from the pipeline
  • Adds -WhatIf support to Set-Secret
  • Adds -WhatIf support to Remove-Secret

SecretStore Preview 2 Updates

This update to SecretStore includes changes to the cmdlet interface based on user feedback and the desire to conform to PowerShell best practices. It also improves error messages to provide a clearer user experience. Read the full list of changes below:

  • Set-SecretStoreConfiguration now supports -Authentication and -Interaction parameters instead of -PasswordRequired and -DoNotPrompt switches
  • Update-SecretStorePassword has been renamed to Set-SecretStorePassword
  • Unlock-SecretStore no longer supports plain text passwords
  • Set-SecretStoreConfiguration now throws an error if called with no parameters
  • Added ProjectUri and "SecretManagement" tag to manifest file
  • Reset-SecretStore now defaults to 'No' when prompting to continue.

Tagging your extension vault on the PowerShell Gallery

If you are publishing an extension vault to the PowerShell Gallery, we ask that you add the tag "SecretManagement" to the module manifest. This will allow the vault to be more discoverable for SecretManagement users.

Feedback and Support

As we approach General Availability for these modules, now is the time to test them against your scenarios, request changes (especially breaking ones), and discover bugs. To file issues or get support for the SecretManagement interface or vault development experience, please use the SecretManagement repository. For issues that pertain specifically to the SecretStore and its cmdlet interface, please use the SecretStore repository.

Sydney Smith

PowerShell Team

 

The post SecretManagement and SecretStore Updates appeared first on PowerShell.

Amazon S3 on Outposts Now Available

AWS Outposts customers can now use Amazon Simple Storage Service (S3) APIs to store and retrieve data in the same way they would access or use data in a regular AWS Region. This means that many tools, apps, scripts, or utilities that already use S3 APIs, either directly or through SDKs, can now be configured to store that data locally on your Outposts.

AWS Outposts is a fully managed service that provides a consistent hybrid experience, with AWS installing the Outpost in your data center or colo facility. These Outposts are managed, monitored, and updated by AWS just like in the cloud. Customers use AWS Outposts to run services in their local environments, like Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), and Amazon Relational Database Service (RDS), and they are ideal for workloads that require low-latency access to on-premises systems, local data processing, or local data storage.

Outposts are connected to an AWS Region and are also able to access Amazon S3 in AWS Regions; however, this new feature allows you to use the S3 APIs to store data on the AWS Outposts hardware and process it locally. You can use S3 on Outposts to satisfy demanding performance needs by keeping data close to on-premises applications. It will also benefit you if you want to reduce data transfers to AWS Regions, since you can perform filtering, compression, or other pre-processing on your data locally without having to send all of it to a Region.

Speaking of keeping your data local, any objects and the associated metadata and tags are always stored on the Outpost and are never sent or stored elsewhere. However, it is essential to remember that if you have data residency requirements, you may need to put some guardrails in place to ensure no one has the permissions to copy objects manually from your Outposts to an AWS Region.

You can create S3 buckets on your Outpost and easily store and retrieve objects using the same Console, APIs, and SDKs that you would use in a regular AWS Region. Using the S3 APIs and features, S3 on Outposts makes it easy to store, secure, tag, retrieve, report on, and control access to the data on your Outpost.

S3 on Outposts provides a new Amazon S3 storage class, named S3 Outposts, which uses the S3 APIs, and is designed to durably and redundantly store data across multiple devices and servers on your Outposts. By default, all data stored is encrypted using server-side encryption with SSE-S3. You can optionally use server-side encryption with your own encryption keys (SSE-C) by specifying an encryption key as part of your object API requests.

When configuring your Outpost you can add 48 TB or 96 TB of S3 storage capacity, and you can create up to 100 buckets on each Outpost. If you have existing Outposts, you can add capacity via the AWS Outposts Console or speak to your AWS account team. If you are using no more than 11 TB of EBS storage on an existing Outpost today you can add up to 48 TB with no hardware changes on the existing Outposts. Other configurations will require additional hardware on the Outpost (if the hardware footprint supports this) in order to add S3 storage.

So let me show you how I can create an S3 bucket on my Outposts and then store and retrieve some data in that bucket.

Storing data using S3 on Outposts

To get started, I updated my AWS Command Line Interface (CLI) to the latest version. I can create a new bucket with the following command, specifying which Outpost I would like the bucket created on by using the --outposts-id switch.

aws s3control create-bucket --bucket my-news-blog-bucket --outposts-id op-12345

In response to the command, I am given the ARN of the bucket. I take note of this as I will need it in the next command.

Next, I will create an access point. Access points are a relatively new way to manage access to an S3 bucket. Each access point enforces distinct permissions and network controls for any request made through it. S3 on Outposts requires an Amazon Virtual Private Cloud (VPC) configuration, so I need to provide the VPC details along with the create-access-point command.

aws s3control create-access-point --account-id 12345 --name prod --bucket "arn:aws:s3-outposts:us-west-2:12345:outpost/op-12345/bucket/my-news-blog-bucket" --vpc-configuration VpcId=vpc-12345

S3 on Outposts uses endpoints to connect to Outposts buckets so that you can perform actions within your virtual private cloud (VPC). To create an endpoint, I run the following command.

aws s3outposts create-endpoint --outpost-id op-12345 --subnet-id subnet-12345 --security-group-id sg-12345

Now that I have set things up, I can start storing data. I use the put-object command to store an object in my newly created Amazon Simple Storage Service (S3) bucket.

aws s3api put-object --key my_news_blog_archives.zip --body my_news_blog_archives.zip --bucket arn:aws:s3-outposts:us-west-2:12345:outpost/op-12345/accesspoint/prod

Once the object is stored I can retrieve it by using the get-object command.

aws s3api get-object --key my_news_blog_archives.zip --bucket arn:aws:s3-outposts:us-west-2:12345:outpost/op-12345/accesspoint/prod my_news_blog_archives.zip

There we have it. I’ve managed to store an object and then retrieve it, on my Outposts, using S3 on Outposts.
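If you are using an SDK rather than the CLI, the same put and get operations can be expressed by passing the access point ARN as the bucket. Here is a minimal sketch with boto3, assuming a recent version that understands S3 on Outposts access point ARNs and that the code runs inside the VPC associated with the endpoint created above:

import boto3

s3 = boto3.client('s3')

# The access point ARN from the create-access-point step above.
access_point_arn = ('arn:aws:s3-outposts:us-west-2:12345:'
                    'outpost/op-12345/accesspoint/prod')

# Store the object on the Outpost.
with open('my_news_blog_archives.zip', 'rb') as data:
    s3.put_object(Bucket=access_point_arn,
                  Key='my_news_blog_archives.zip',
                  Body=data)

# Retrieve it again.
result = s3.get_object(Bucket=access_point_arn, Key='my_news_blog_archives.zip')
with open('my_news_blog_archives_copy.zip', 'wb') as out:
    out.write(result['Body'].read())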

Transferring Data from Outposts

Now that you can store and retrieve data on your Outposts, you might want to transfer results to S3 in an AWS Region, or transfer data from AWS Regions to your Outposts for frequent local access, processing, and storage. You can use AWS DataSync to do this with the newly launched support for S3 on Outposts.

With DataSync, you can choose which objects to transfer, when to transfer them, and how much network bandwidth to use. DataSync also encrypts your data in-transit, verifies data integrity in-transit and at-rest, and provides granular visibility into the transfer process through Amazon CloudWatch metrics, logs, and events.

Order today

If you want to start using S3 on Outposts, please visit the AWS Outposts console, where you can add S3 storage to your existing Outposts or order an Outposts configuration that includes the desired amount of S3 capacity. If you'd like to discuss your Outposts purchase in more detail, contact our sales team.

Pricing with AWS Outposts works a little bit differently from most AWS services, in that it is not a pay-as-you-go service. You purchase Outposts capacity for a 3-year term and you can choose from a number of different payment schedules. There are a variety of AWS Outposts configurations featuring a combination of EC2 instance types and storage options. You can also increase your EC2 and storage capacity over time by upgrading your configuration. For more detailed information about pricing check out the AWS Outposts Pricing details.

Happy Storing

— Martin

KB article to create custom alerting is not working for me

Hi team,

 

Hoping you can pinpoint the root cause of the 400 error pasted below. I am trying to create a new email template and I get a 400 error.

 

VMware Knowledge Base

 

I executed step 3 from the KB article, replacing the IP address with that of the master node.

 

]# curl -k -X POST -i -H "X-vRealizeOps-API-use-unsupported: true" -H "Content-Type: application/json; charset=UTF-8" -u admin -d '{
"id" : null,
"name" : "Email Template 1",
"html" : true,
"template" : "$$Subject=[Email Template 1 Subject] State:{{AlertCriticality}}, Name:{{AffectedResourceName}} \n\n New alert was generated at: {{AlertGenerateTime}} Info: {{AffectedResourceName}} {{AffectedResourceKind}}
Alert Definition Name: {{AlertDefinitionName}}
Alert Definition Description: {{AlertDefinitionDesc}}
Object Name : {{AffectedResourceName}}
Object Type : {{AffectedResourceKind}}
Alert Impact: {{AlertImpact}}
Alert State : {{AlertCriticality}}
Alert Type : {{AlertType}}
Alert Sub-Type : {{AlertSubType}}
Object Health State: {{ResourceHealthState}}
Object Risk State: {{ResourceRiskState}}
Object Efficiency State: {{ResourceEfficiencyState}}
Symptoms:
{{Anomalies}} Recommendations: {{AlertRecommendation}} vROps Server – {{vcopsServerName}} Alert details
",
"others" : [ ],
"otherAttributes" : {}
}' https://10.50.6.88/suite-api/api/notifications/email/templates
Enter host password for user ‘admin’:
HTTP/1.1 400 400
Date: Wed, 30 Sep 2020 19:01:08 GMT
Server: Apache
X-Request-ID: 4hh8Gupwd06DRmHiPoyKGdqIXSDJic1s
Access-Control-Allow-Origin: *
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Content-Security-Policy: default-src https: data: 'unsafe-inline' 'unsafe-eval'; child-src *
Vary: User-Agent
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?><ops:error xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:ops="http://webservice.vmware.com/vRealizeOpsMgr/1.0/" httpStatusCode="400" apiErrorCode="400"><ops:message>Invalid input format.</ops:message><ops:moreInformation><ops:info name="api-uri">/suite-api/api/notifications/email/templates</ops:info></ops:moreInformation></ops:error>
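For completeness, here is the same request expressed as a small Python sketch using the requests library, in case that is easier to reproduce; the password is a placeholder and the template body is shortened, so substitute the full template from the curl command above.

import requests

url = 'https://10.50.6.88/suite-api/api/notifications/email/templates'

payload = {
    'id': None,
    'name': 'Email Template 1',
    'html': True,
    # Shortened placeholder; use the full template body from the curl example above.
    'template': '$$Subject=[Email Template 1 Subject] State:{{AlertCriticality}}, '
                'Name:{{AffectedResourceName}}',
    'others': [],
    'otherAttributes': {}
}

headers = {
    'X-vRealizeOps-API-use-unsupported': 'true',
    'Content-Type': 'application/json; charset=UTF-8'
}

# verify=False mirrors curl -k for the self-signed appliance certificate.
response = requests.post(url, json=payload, headers=headers,
                         auth=('admin', '<password>'), verify=False)
print(response.status_code, response.text)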

 

 

thanks,

Salman