We have been following all the awesome product announcements at AWS re:Invent 2018 closely. Amazon has released a number of interesting new services and also big improvements to existing services. In this blog post we'll take a quick look at the most interesting announcements from Clouden's perspective, divided into four main topics.
#Serverless
AWS Lambda, the FaaS service at the heart of AWS-based serverless applications, now supports custom runtime environments. This means you can now develop Lambda functions in any programming language by implementing a custom runtime for it.
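A custom runtime is essentially a small program that talks HTTP to Lambda's Runtime API in a loop. The sketch below shows the shape of that contract; the API paths come from the Lambda runtime interface, while the helper functions and host values are just for illustration.

```typescript
// Sketch of the HTTP contract a custom Lambda runtime implements.
// The Runtime API paths are from the AWS Lambda runtime interface;
// the helper functions themselves are illustrative.

const API_VERSION = "2018-06-01";

// URL the runtime polls (long GET) to receive the next invocation event.
function nextInvocationUrl(apiHost: string): string {
  return `http://${apiHost}/${API_VERSION}/runtime/invocation/next`;
}

// URL the runtime POSTs the handler's result to.
function invocationResponseUrl(apiHost: string, requestId: string): string {
  return `http://${apiHost}/${API_VERSION}/runtime/invocation/${requestId}/response`;
}

// A custom runtime's main loop (pseudocode, not executed here):
//   1. GET  nextInvocationUrl(process.env.AWS_LAMBDA_RUNTIME_API)
//   2. read the request id from the Lambda-Runtime-Aws-Request-Id header
//   3. run the handler with the event payload from the response body
//   4. POST the result to invocationResponseUrl(...)
//   5. repeat
```

The runtime ships as an executable file named `bootstrap` inside the function package or a layer, which is how the third-party language runtimes below are distributed.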
Amazon has released official Ruby language support for Lambda, along with reference implementations of custom runtimes for C++ and Rust. You can also find various third-party runtimes on the AWS Lambda Partners page, including PHP, Erlang and COBOL (!) support.
At Clouden we mostly use TypeScript to develop serverless applications, and it's already supported out of the box. But we do have some new service ideas that might make use of the new PHP runtime. The new Layers support that was also added to Lambda will make it much easier to share critical code between multiple Lambda functions. For instance, when a service's data models are updated, we want to deploy the new models to all functions right away. We're also looking forward to seeing how Layers affect Lambda function cold start times and potentially improve response times.
Another new serverless announcement is the set of service integrations for AWS Step Functions. This makes it possible to implement even more use cases with Amazon's state machines, avoiding the need to write custom code. Newly supported services include DynamoDB tables, SNS topics, SQS queues and many others. Instead of writing a custom Lambda function for every action, you can read and write DynamoDB items, publish messages to SNS topics and send messages to SQS queues directly from Step Functions task states. This will simplify several of the background processes behind Clouden's services.
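As a concrete example, here is a minimal Amazon States Language definition (written as a TypeScript object) that writes a DynamoDB item directly from a task state, with no Lambda function in between. The `arn:aws:states:::dynamodb:putItem` resource is the direct service integration; the table and item fields are made up for illustration.

```typescript
// Minimal state machine: one task state that calls DynamoDB directly.
// Table name and item attributes are hypothetical.
const definition = {
  StartAt: "SaveOrder",
  States: {
    SaveOrder: {
      Type: "Task",
      // Direct service integration: Step Functions calls DynamoDB itself,
      // no Lambda glue code required.
      Resource: "arn:aws:states:::dynamodb:putItem",
      Parameters: {
        TableName: "Orders",
        Item: {
          orderId: { S: "example-123" },
          status: { S: "NEW" },
        },
      },
      End: true,
    },
  },
};
```

In a real workflow the item attributes would typically be pulled from the state input with JsonPath parameters rather than hard-coded.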
#Databases and Blockchains
DynamoDB's new On-Demand pricing model means that you no longer need to provision tables with fixed capacity at an hourly price. Instead, you can choose to pay for the individual read and write operations your application actually makes. The unit price is higher, but you completely avoid paying for all the time your tables sit idle. This is very useful for small services with many tables that are idle most of the time but see occasional high usage peaks.
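A quick back-of-the-envelope estimate shows how this works out. The unit prices below are the approximate 2018 us-east-1 list prices per million request units; check current pricing before relying on these numbers.

```typescript
// Approximate 2018 us-east-1 On-Demand list prices (USD per 1M request units).
// These are assumptions for illustration; verify against current pricing.
const PRICE_PER_MILLION_READS = 0.25;
const PRICE_PER_MILLION_WRITES = 1.25;

// Monthly On-Demand cost for a given number of read and write request units.
function onDemandCostUsd(reads: number, writes: number): number {
  return (reads / 1e6) * PRICE_PER_MILLION_READS +
         (writes / 1e6) * PRICE_PER_MILLION_WRITES;
}
```

For example, a mostly idle table doing one million reads and one million writes a month costs about $1.50, no matter how bursty the traffic is, whereas provisioned capacity sized for the peaks would bill for every idle hour.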
The new DynamoDB Transaction support allows us to use DynamoDB for use cases that require transactional processing across several tables. A typical example is payment processing, where credit balances, service entitlements, invoices and various other items must be processed securely in a synchronized way. DynamoDB's serializable transactions ensure that the entire operation either succeeds or fails as a whole, keeping everything in sync. Previously these use cases usually required an SQL database, which is more complex to maintain and harder to scale.
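The payment example above might look roughly like this as a TransactWriteItems request, built here as a plain object (in real code it would be passed to the AWS SDK's DynamoDB client). The table names, keys and amounts are hypothetical.

```typescript
// Sketch of a TransactWriteItems request: deduct a balance and record an
// invoice as one all-or-nothing unit. All names and values are invented.
const transaction = {
  TransactItems: [
    {
      // Deduct the charge, but only if the balance is sufficient.
      Update: {
        TableName: "Balances",
        Key: { customerId: { S: "cust-1" } },
        UpdateExpression: "SET balance = balance - :amount",
        ConditionExpression: "balance >= :amount",
        ExpressionAttributeValues: { ":amount": { N: "100" } },
      },
    },
    {
      // Record the invoice in the same transaction.
      Put: {
        TableName: "Invoices",
        Item: {
          invoiceId: { S: "inv-42" },
          customerId: { S: "cust-1" },
          amount: { N: "100" },
        },
      },
    },
  ],
};
```

If the condition check fails (insufficient balance), neither the balance update nor the invoice write takes effect, which is exactly the guarantee that used to require an SQL database.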
Amazon also released a few other interesting databases. Amazon Timestream is a new general-purpose time series database optimized for use cases where data accumulates over time and is often queried for specific time intervals. It's fully managed and has a serverless pricing model, where you pay for write requests, the amount of data queried, and the amount of data stored. You can store data in memory, on SSD or on magnetic storage depending on the use case, and set retention policies to automatically manage the data. We will be evaluating Timestream as a time series database solution for our Clouden Ping service, which currently uses a custom DynamoDB- and S3-based solution.
Another specialized database just released is Amazon QLDB (Quantum Ledger Database). It is essentially an append-only ledger where you can store information that should remain immutable and readable forever. This resembles a blockchain, but it is not a peer-to-peer solution; instead it is fully hosted in the cloud. The pricing model is serverless, based on the number of read and write operations and the amount of data stored and transferred. We think it might be a very good fit for storing customer usage metrics, invoice history and other accounting data used for billing purposes. QLDB's per-request price is about half of DynamoDB's On-Demand price, and its data storage cost is almost one tenth, so you also get cost benefits compared to a custom solution.
Amazon also announced a new Managed Blockchain service, which will support both Hyperledger and Ethereum blockchains. It's very useful to get a managed service for connecting to the Ethereum network, because running your own Ethereum client in the cloud can be tricky and expensive. Unfortunately the pricing is hour-based and you still need to deal with instance sizes and nodes. It would be much more useful to have a serverless pricing model, charged by the number of transactions made, with a graph API for reading the blockchain data that is common to everybody on the network.
#IoT and Edge Computing
Clouden has not yet released any IoT services, but we have lots of experience and expertise in this area and are considering several possibilities in the future. For this reason we are very interested in the related cloud services and software solutions that Amazon offers.
One of the biggest IoT announcements was the new AWS IoT Events service. It lets you model complex industrial processes and react to events that are triggered by combinations of messages from multiple pieces of equipment and other data sources. The service maintains user-defined system states, so that received messages can change the current state and the event logic can behave accordingly. This means you don't have to deploy a separate database to maintain the system states yourself. The pricing is serverless and you pay for the number of messages processed by your event logic.
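The core idea, a service-maintained state machine driven by incoming messages, can be illustrated with a toy detector. To be clear, this is a generic sketch of the concept, not the actual IoT Events detector model schema; the states, thresholds and message shape are invented.

```typescript
// Toy finite-state detector illustrating the IoT Events idea: the service
// holds the current state, and incoming messages drive transitions.
// States, thresholds and message fields are invented for this sketch.
type DetectorState = "normal" | "overheating" | "alarm";

function nextState(
  current: DetectorState,
  message: { temperature: number }
): DetectorState {
  switch (current) {
    case "normal":
      // First hot reading moves us to a warning state.
      return message.temperature > 80 ? "overheating" : "normal";
    case "overheating":
      // A second hot reading escalates; a cool reading recovers.
      return message.temperature > 80 ? "alarm" : "normal";
    case "alarm":
      // Stays latched until an operator resets it out of band.
      return "alarm";
  }
}
```

In IoT Events this transition logic lives in the managed service, so there is no database or Lambda function to run just to remember which state each piece of equipment is in.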
AWS also announced IoT SiteWise, which includes an on-site edge gateway solution for collecting data and uploading it to the cloud. This is different from the traditional MQTT-based AWS IoT Core, where you are charged per message. The SiteWise pricing model is based on the amount of data uploaded to the cloud in gigabytes. You can also model your industrial processes, calculate aggregate values, and upload the data directly to a time series database.
Another interesting new service is the AWS IoT Things Graph. It's a tool for visually designing IoT models and applications. It also provides features for integrating with different kinds of IoT device protocols and data formats. This seems to be very similar to the popular Node-RED open source project. You'll be able to run Things Graph workflows locally on Greengrass based edge devices. The pricing is based on the number of edge deployments.
Related to IoT Things Graph, AWS also announced a number of new Greengrass features for edge computing. The new Connectors offer built-in connectivity from Greengrass devices to AWS cloud services like Kinesis Firehose and SNS, as well as many other targets.
On a slightly lower level, Amazon FreeRTOS has received Bluetooth Low Energy support. This will allow microcontroller based devices to work as gateways, reading data from Bluetooth devices and delivering it to AWS IoT.
All these features together make AWS a very compelling platform for building IoT applications and services without worrying about many of the underlying details of both the edge and the cloud. If we release IoT services later on, they will be based on the AWS IoT platform features as much as possible.
#Cloud Architecture and DevOps
AWS also released a useful new Well-Architected Tool for reviewing cloud architecture designs. You can define your workloads and then analyze their compliance with the AWS Well-Architected Framework by answering various checkbox questions. The tool offers videos and conceptual descriptions to help you learn and make sure you've understood all the points correctly. We will be using this tool to evaluate the design of Clouden services in the future.
On the operations side, AWS released a new CloudWatch Logs Insights feature which will take away some of the pain of analyzing service logs. You can run custom queries and aggregations against CloudWatch Logs directly in the AWS Console, charged by the amount of data analyzed. This eliminates the need to set up a separate Elasticsearch instance or other log indexing solution.
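Insights queries use a simple pipe-based syntax. As a small example, a query like the following lists the twenty most recent log lines containing "ERROR" (the filter pattern is just an illustration; adapt it to your own log format):

```
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
```

Since you pay per gigabyte of data scanned, narrowing the time range of a query also keeps its cost down.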
Finally, AWS also released a new long-term S3 data storage class called Glacier Deep Archive. At roughly a quarter of the cost of regular Glacier, it's the most affordable way to store large amounts of backup data that almost never needs to be retrieved. Data in Deep Archive can be restored within 12 hours when needed. The minimum storage period is 180 days and the price is $0.00099/GB per month (about $1.01/TB per month). So if you have a terabyte of backups, the cost is about $12/year. The regular Glacier storage class costs about 4x as much. In addition to the new storage class, S3 also got some new features to simplify the backup and restore of Glacier data.
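The yearly figure above follows directly from the per-gigabyte price. The rate below is the list price quoted at launch; verify it against current pricing before budgeting.

```typescript
// Glacier Deep Archive storage price at launch (USD per GB-month).
// An assumption for illustration; check current S3 pricing.
const DEEP_ARCHIVE_PER_GB_MONTH = 0.00099;

// Yearly storage cost for a given amount of data, ignoring retrievals
// and the 180-day minimum storage period.
function yearlyCostUsd(gigabytes: number): number {
  return gigabytes * DEEP_ARCHIVE_PER_GB_MONTH * 12;
}

// 1 TB (1024 GB) of backups works out to roughly $12 per year.
```

Note this only covers storage; restores and early deletions within the 180-day minimum are billed separately.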
Clouden has been an AWS user since the beginning and this year's re:Invent announcements keep us convinced that AWS is the most advanced cloud platform available. In most cases it's also the most affordable, thanks to the request-based pricing models in many services.
This blog post has only covered a fraction of all the new announcements, but we hope it gives you some technical perspective on how Clouden develops its services. If you want to catch up on everything, we recommend checking out the AWS Launchpad videos on Twitch and the re:Invent 2018 keynotes and announcements on YouTube. Werner Vogels' keynote is one of our personal favorites.