Five IoT Hardware Features That Lead to Agile Development

Kurtis McBride and Frank Voisin of Scotland have established a huge 475,000 sq. ft. warehouse as a base for their dream project: an IoT hardware hub. As the Internet of Things (IoT) permeates business and everyday life in ever more varied implementations, hardware platforms have to keep pace with the imagination of software solution providers. Here are five IoT hardware features that enable agile software solutions.

1. Wi-Fi, Low-Power RF, Bluetooth, and Ethernet Support
A classic example of this feature is the OpenKontrol Gateway by Ciseco, which can run for 24 hours on just half a watt. It has a built-in XRF radio, SD card slot, RTC, SRAM, and an XV Wi-Fi module. The Arduino Yún is also a hybrid board with built-in Ethernet and Wi-Fi connectivity.

2. Linux Platform Computing
The Raspberry Pi gave wings to our imagination: a board that runs Linux and has USB ports, an HDMI port, an Ethernet port, and Wi-Fi support. It has a vast community that makes IoT project development feasible. The BeagleBone Black is a low-cost development platform that runs Linux on an ARM Cortex-A8 processor. It has built-in Ethernet connectivity and an HDMI port to connect a monitor. Its strong community makes it an excellent platform for IoT solution development.

3. Wi-Fi Modules for Microcontrollers
Wi-Fi modules give microcontrollers internet connectivity via UART communication. The ESP8266 comes with a TCP/IP protocol stack integrated into the module, and these modules ship pre-programmed with AT commands, which makes them easy to plug into IoT projects. With its GPIO pins, application-specific devices and sensors can be attached as well.

4. Prototyping Platforms
Prototyping platforms come fully equipped with Ethernet and Wi-Fi connectivity features that make them essential for working with IoT. Netduino is a well-known open-source electronics prototyping platform built around the STMicro STM32F4 microcontroller and runs the .NET Micro Framework. Some platforms offer additional features such as UART, I2C, SPI, and SD card connectivity. Arduino boards are again familiar devices: their Ethernet shield can be used for wired connections, and the Wi-Fi shield for wireless connections to the internet.

5. Multi-Development Platforms
A single board that supports development on multiple platforms. These are typically open source, community-driven, and cost-effective. The UDOO is one such platform that works with Linux, Android, Google ADK 2012, and Arduino; the board provides a flexible environment for IoT innovation. The Hackberry is another popular board that runs Android and Linux and offers both Wi-Fi and Ethernet.

IoT hardware forms the foundation for executing robust software solutions, and choosing it well is essential to IoT development success.
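To make the AT-command workflow concrete, here is a minimal sketch that builds the typical command sequence a microcontroller would send to an ESP8266 over UART to join a network and open a TCP connection. The SSID, password, and host below are placeholders, and the exact command set can vary between AT firmware versions.

```javascript
// Builds the AT command sequence an MCU would write to the ESP8266's UART.
// SSID, password, and host are placeholders, not real credentials.
function esp8266ConnectSequence(ssid, password, host, port) {
  return [
    'AT',                                    // sanity check; module answers "OK"
    'AT+CWMODE=1',                           // station (client) mode
    `AT+CWJAP="${ssid}","${password}"`,      // join the access point
    `AT+CIPSTART="TCP","${host}",${port}`,   // open a TCP connection
  ];
}

const sequence = esp8266ConnectSequence('MySSID', 'secret', 'example.com', 80);
console.log(sequence.join('\r\n'));          // AT commands are CR-LF terminated
```

Each command would be written to the serial port and the next one sent only after the module replies "OK"; that response handling is omitted here for brevity.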

Split and Clone Editor Views in Eclipse

Sometimes it is all about knowing the simple tricks in Eclipse that make life easier. Like this one: how to get a split editor view so I can edit multiple sections of a source file at the same time. That feature has been present since Eclipse Luna, but because there is no icon for it in the view itself, as there is in Microsoft Word, I have found that many do not know about it. The screenshots below are from Eclipse Luna.

Split Editor View
To split an editor view, I select it (to make it active), then use the menu 'Toggle Split Editor'. I can split it horizontally or vertically, use the mouse to resize the split area, and remove the split by simply using the menu or shortcut again.

Clone Editor View
The other useful function is to clone an editor view. This creates a clone of that view. To 'undo' the cloning, I close the new editor view.

Summary
Splitting and cloning give me a way to edit the same source file in different portions of that file. The commands to Clone and Split are under the Window > Editor menu.

Happy Cloning and Splitting!

The Life of a Serverless Microservice on AWS

In this post, I will demonstrate how you can develop, test, deploy, and operate a production-ready serverless microservice using the AWS ecosystem. The combination of AWS Lambda and Amazon API Gateway allows us to operate a REST endpoint without the need for any virtual machines. We will use Amazon DynamoDB as our database, Amazon CloudWatch for metrics and logs, and AWS CodeCommit and AWS CodePipeline as our delivery pipeline. In the end, you will know how to wire together a bunch of AWS services to run a system in production.

The Life
My idea of "The Life of a Serverless Microservice on AWS" is best described by this figure: a developer pushes code changes to a repository. This git push triggers the CI & CD pipeline to deploy a new version of the service, which our users consume. The load generated on the system produces logs and metrics that the developer uses to operate the system. The operational feedback is used to improve the quality of the system.

What is Serverless?
Serverless, or Function as a Service (FaaS), describes the idea that the deployment unit is a single function. A function takes input and returns output. The responsibility of the FaaS user is to develop the function, while the FaaS provider's responsibility is to execute the function whenever some event happens. The following figure demonstrates this idea. Some possible events:

File uploaded.
E-mail received.
Database changed.
Manual invocation.
HTTP API called.
Cron schedule.

The cool things about a serverless architecture:

You only pay when the function is executed.
No under/over provisioning.
No boot time.
No patching.
No SSH.
No load balancing.

Read more about Serverless Architectures if you are interested in the details.

What is a Microservice?
Imagine a small system where users have a publicly visible profile page with location information for that user. The idea of a microservice architecture is that you slice your system into smaller units around bounded contexts.
I identified three of them:

Authentication Service: Handles authentication.
Location Service: Manages location information via a private HTTP API. Uses the Authentication Service internally to authenticate requests.
Profile Service: Stores and retrieves the profile via a public HTTP API. Makes an internal call to the Location Service to retrieve the location information.

Each service gets its own database, and services communicate with each other only over well-defined APIs, never through the database!

Let's get started!
The source code and installation instructions can be found at the bottom of this page. Please use the us-east-1 region! We will use services that are not available in other AWS regions at the moment.

Code
AWS CodeCommit is a hosted Git repository that uses IAM for access control. You need to upload your public SSH key to your IAM user as shown in the following figure. Creating a repository is simple: just click the Create new Repository button in the AWS Management Console. We need a repository for each service. You can then clone a repository locally with the following command. Replace $SSHKeyID with the SSH key ID of your IAM user and $RepositoryName with the name of your repository:

git clone ssh://$SSHKeyID@git-codecommit.us-east-1.amazonaws.com/v1/repos/$RepositoryName

We now have a home for our code.

Continuous Integration & Continuous Delivery
AWS CodePipeline is a service to manage a build and deployment pipeline. CodePipeline itself is only responsible for triggering integrations to do things like:

Build.
Test.
Deploy.

We need a pipeline for each service that:

Downloads the sources from CodeCommit if something changes there.
Runs our tests and bundles the code in a zip file for Lambda.
Deploys the zip file.

Luckily, CodePipeline has native support for downloading sources from CodeCommit. To run our tests, we will use a third-party integration that triggers Solano CI to run the tests and bundle the source files.
The deployment step is implemented in a Lambda function that triggers a CloudFormation stack update. A CloudFormation stack is a bunch of AWS resources managed by CloudFormation based on a template that you provide (Infrastructure as Code). Read more about CloudFormation on our blog. The following figure shows the pipeline.

The cool thing about CloudFormation is that you can define the pipeline itself in a template, so we get Pipeline as Code. The CloudFormation template that is used for service deployment describes a Lambda function, a DynamoDB database, and an API Gateway. After deployment, you will see one CloudFormation stack for each service. We now have a CI & CD pipeline.

Service
We use a bunch of AWS services to run our microservices.

Amazon API Gateway
API Gateway is a service that offers a configurable REST API as a service. You describe what should happen if a certain HTTP method (GET, POST, PUT, DELETE, ...) is called on a certain HTTP resource (e.g. /user). In our case, we want to execute a Lambda function if an HTTP request comes in. API Gateway also takes care of mapping input and output data between formats. The following figure shows what this looks like in the AWS Management Console for the Profile Service.

API Gateway is a fully managed service. You only pay for requests: no under/over provisioning, no boot time, no patching, no SSH, no load balancing.
AWS takes care of all those aspects. Read more about API Gateway on our blog.

AWS Lambda
To run code in AWS Lambda, you need to:

Use one of the supported runtimes (Node.js (JavaScript), Python, or the JVM (Java, Scala, ...)).
Implement a predefined interface.

The interface, in abstract terms, requires a function that takes an input parameter and then returns void, returns something, or throws an error. We will use the Node.js runtime, where a function implementation looks like this:

exports.handler = function(event, context, cb) {
  console.log(JSON.stringify(event));
  // TODO do something
  cb(null, {name: 'Michael'});
};

In Node.js, the function is not expected to return something. Instead, you need to call the callback function cb that is passed into the function as a parameter. The following figure shows what this looks like in the AWS Management Console for the Profile Service.

AWS Lambda is a fully managed service. You only pay for function executions: no under/over provisioning, no boot time, no patching, no SSH, no load balancing. AWS takes care of all those aspects. Read more about Lambda on our blog.

Amazon DynamoDB
DynamoDB is a key-value store or document store: you can look up values by their key. DynamoDB replicates across multiple Availability Zones (data centers) and is eventually consistent. The following figure shows what this looks like in the AWS Management Console for the Authentication Service.

Amazon DynamoDB is a 99% managed service. The 1% that is up to you: you need to provision read and write capacity. When your service makes more requests than provisioned, you will see errors, so it is your job to monitor the consumed capacity and increase the provisioned capacity before you run out. Read more about DynamoDB on our blog.

Request Flow
The three services work together in the following way: the user's HTTP request hits API Gateway. API Gateway checks if the request is valid; if so, it invokes the Lambda function.
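A concrete handler for this flow might look like the following sketch, written against the Node.js interface shown above. The table name, key schema, and the injected db client are illustrative assumptions; in a real Lambda you would use an AWS SDK DynamoDB client instead of the in-memory stub used here for local testing.

```javascript
// Sketch of a profile-lookup handler (table name, key schema, and the
// injected `db` client are illustrative assumptions, not the real service).
function makeHandler(db) {
  return function handler(event, context, cb) {
    db.get({TableName: 'profile', Key: {id: event.pathId}}, function(err, data) {
      if (err) return cb(err);                      // propagate database errors
      if (!data.Item) return cb(new Error('not found'));
      cb(null, {id: data.Item.id, name: data.Item.name}); // response body for API Gateway
    });
  };
}

// In-memory stub standing in for DynamoDB, for local testing only.
const stubDb = {
  get: function(params, cb) {
    const items = {'42': {id: '42', name: 'Michael'}};
    cb(null, {Item: items[params.Key.id]});
  }
};

makeHandler(stubDb)({pathId: '42'}, {}, function(err, result) {
  console.log(err, result); // null { id: '42', name: 'Michael' }
});
```

Injecting the database client keeps the business logic testable without AWS credentials, which fits the CI step of the pipeline described earlier.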
The function makes one or more requests to the database and executes some business logic. The result of the function is then transformed into an HTTP response by API Gateway. We now have an environment to run our microservices.

Logs, Metrics, and Alerting
A black box is very hard to operate. That's why we need as much information from the inside of the system as possible. AWS CloudWatch is the right place to store and analyze this kind of information:

Metrics (numbers).
Logs (text).

CloudWatch also lets you define alarms on metrics. The following figure demonstrates how the pieces work together. Operational insights that you get out of the box:

Lambda writes STDOUT and STDERR to CloudWatch Logs.
Lambda publishes metrics to CloudWatch about the number of invocations, runtime duration, the number of failures, etc.
API Gateway publishes metrics about the number of requests, 4XX and 5XX response codes, etc.
DynamoDB publishes metrics about consumed capacity, the number of requests, etc.

The following figure shows a CloudWatch alarm that is triggered if the number of throttled read requests on the Location Service DynamoDB table is greater than or equal to one. This situation indicates that the provisioned capacity is not sufficient to serve the traffic. With all those metrics and alarms in place, we can now be confident that we receive an alert if our system is not working properly.

Summary
You can run a high-quality system on AWS using only managed services. This approach frees you from many operational tasks that are not directly related to your service. Think of operating a monitoring system, a log index, a database, virtual machines, etc. Instead, you can focus on operating and improving your service's code. The following figure shows the overall architecture of our system.

Serverless or FaaS does not force you to use a specific framework.
As long as you are fine with the interface (a function with input and output), you can do whatever you want inside your function to produce an output from the given input.
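As an illustration of the alarm described above, a throttled-read alarm could be declared in the service's CloudFormation template roughly like this. This is a sketch: the logical resource names, topic reference, and threshold are assumptions, though ReadThrottleEvents is a real DynamoDB CloudWatch metric.

```yaml
# Sketch of a CloudWatch alarm on DynamoDB read throttling.
# LocationTable and AlertTopic are assumed resources in the same template.
ReadThrottleAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: AWS/DynamoDB
    MetricName: ReadThrottleEvents
    Dimensions:
      - Name: TableName
        Value: !Ref LocationTable
    Statistic: Sum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 1
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref AlertTopic
```

Because the alarm lives in the same template as the table, the monitoring is versioned and deployed through the very pipeline it watches.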

Driving, Surviving and Thriving Industry Disruption With APIs

A simple online shopping task or purchase used to be extraordinary. That was yesterday's news. Today, companies such as Rogers Communications put live NHL game content in the hands of mobile users; General Motors gives drivers the ability to start, stop, and read diagnostic information even when away from the vehicle; and L'Oréal keeps retail partner and shopper loyalty high with fully integrated product stock and pricing information.

These user engagement experiences have lasting effects. They affect the way we live, work, and compete. Companies are no longer compared only to the competitors within their own traditional boundaries, but to all organizations that deliver a great customer experience.

Continuous Innovation Through APIs
What we have learned through industry disruption is that digital disruptors in existing or new industries find ways to continually innovate. The newly released API research report, "APIs and the Digital Enterprise: From Operational Efficiency to Digital Disruption," uncovered key areas digital disruptors are investing in to adapt and thrive in the new application economy.

1. Release engaging apps faster: To deliver optimal digital experiences, businesses need to get apps out faster, understand usage sooner, and iterate more often. It is not about delivering the most features but the right features in today's experience-driven world. 84 percent of organizations are using or planning to use APIs to speed delivery of revenue-enhancing apps.

2. Unlock data silos to improve customer experience: Businesses need to tap into useful data that is often hidden away in silos, accessed rarely or never at all. The ability to use consumer insights to deliver a more engaging experience across channels provides a much more convenient and personalized experience that keeps customers coming back.
85 percent of organizations are using or planning to use APIs to deliver new and better customer experiences.

3. Expand reach through digital ecosystems: Successful digital organizations must find a way to open up information and expand digital ecosystems. The ability to effectively share information allows digital disruptors to extend their reach into entirely new markets. Understanding how to connect and share data will enable companies to capitalize on the network effect ecosystems offer. 84 percent of organizations are using or planning to use APIs to extend their digital reach.

L'Oréal: How APIs Drove Consumer Goods Innovation
As the world's largest cosmetics company, L'Oréal is dedicated to beauty. L'Oréal sought to modernize on a single ecommerce platform in the Americas, with the ability to support any combination of digital, retail, and partner channels through a consistent set of APIs. L'Oréal saw a 5X increase in the number of application connections without having to expand its development teams. In addition, L'Oréal was able to tap into the value of customer data, combining it with merchandise information to deliver a better customer experience. This took shape with the L'Oréal Makeup Genius app, which delivered an innovative omni-channel virtual experience of trying on makeup. Finally, L'Oréal was able to open and share product data with partners such as Target. This helped reduce operational costs by cutting down inventory overage while ensuring shelves were filled with accurate pricing. All of this was possible through APIs.

Why API Management
Successful organizations such as L'Oréal have found API management necessary for their API projects. The right API management solution should allow organizations to create APIs and integrate them with data, secure APIs and mitigate their risk, accelerate the development of mobile and IoT apps connecting to APIs, and unlock the value of data by engaging in new digital ecosystems.
See www.ca.com/api to learn more about how CA API Management can help.

Democrat gun control sit-in sparks social media sensation

A blackout of television cameras in the U.S. House of Representatives during the Democrats' gun control sit-in may have spurred public interest in the protest, as it forced the demonstrators to use social media to broadcast their message.

Democrats leapt on Facebook Live and Twitter's Periscope after the cameras, controlled by the House, went dark Wednesday when presiding House officer and Republican Representative Ted Poe declared the chamber not in order during the protest. As Democrats took to alternative forms of video broadcasting, their message gained tremendous momentum from social media. On Twitter, the hashtags #NoBillNoBreak and #HoldTheFloor have been tweeted at least 1.4 million times. Of the roughly 20 members of Congress who remained at the sit-in overnight, 19 of them used Facebook Live for a total combined viewership of 3 million.

“It really connected with people out there,” Congressman Scott Peters told Reuters. "This whole phenomenon with [live video] struck a nerve." Peters used the application Periscope, which is connected to the social media platform Twitter, to send out video. “Without that, think about it, it would have been a caucus meeting where we talk to ourselves," he added.

In remarks Wednesday outside the Capitol, House Democratic Leader Nancy Pelosi praised how her party harnessed social media. "Without you and without the technology of Periscope [the sit-in] would just be a debate in the Halls of Congress unrecorded because they turned off the microphones," Pelosi said. "But we raised our voices. They turned off the cameras and we went to Periscope."

Congressman Mark Takano, who began posting live videos from the chamber to his Facebook page Wednesday afternoon and continued throughout the night, said the social media video helped him connect with constituents. "Once I got started with the live streaming I didn’t feel like I could let down the people who were following me,” said Takano.
“It was a way to push out a message.”

Even C-SPAN, which typically broadcasts footage recorded by the House cameras, picked up live video from four different members of Congress roughly two hours after the House cameras shut down, according to communications director Howard Mortman. It marked the first time the channel broadcast a live social media feed from the House floor.

"Something interesting is happening with Facebook Live that's bringing more openness to the political process," said Mark Zuckerberg, CEO of Facebook, in a post to his social media profile Thursday. "It's a way to share anything you want with the world using just your phone."

(Reporting by Amy Tennery; additional reporting by Angela Moon in New York and Susan Cornwell in Washington; editing by Andrew Hay)
