Saturday 30 June 2018

Buddy by Blue Frog
With its cheerful and sweet little face expressing emotion, Buddy by Blue Frog Robotics is a companion robot for the family. The fully mobile robot moves on two motorised wheels. With its sensors, cameras, and microphones, the robot is able to hear and speak. It is full of personality and reacts to its environment through a range of expressions (such as happiness, grumpiness, anger or sadness) that allow it to better engage with its family. The robot can help the family with daily tasks, protect the home, entertain the children, and interact with other smart connected devices in the home.

Friday 29 June 2018

Honda's new 3E (Empower, Experience, Empathy) Robotics.


Better known for its ASIMO humanoid robot, Honda unveiled a family of robots under its new 3E (Empower, Experience, Empathy) Robotics Concept, which demonstrates the company’s vision of a society where robotics and AI (artificial intelligence) can assist people in a multitude of situations, from disaster recovery and recreation to learning from human interaction to become more helpful and empathetic. The family includes the 3E-A18, a companion robotics concept that shows compassion to humans with a variety of facial expressions; 3E-B18, a chair-type mobility concept designed for casual use in indoor or outdoor spaces; 3E-C18, a small-sized electric mobility concept with multifunctional cargo space; and 3E-D18, an autonomous off-road vehicle concept with AI designed to support people in a broad range of work activities.

Wednesday 27 June 2018

Ubtech Robotics’ Walker.




Ubtech Robotics’ Walker is the world’s first commercialised biped (walking on two legs) robot for the consumer market, offering a complete “home butler” service. The robot is designed to provide smart assistance and support across a wide range of people’s daily lives. Activated by voice or via a touch screen, it can perform a variety of functions for the home, including smart home control, video surveillance monitoring, security patrol monitoring, motion detection, instant alarms, video calls/conferencing, real-time email integration, calendar/schedule management, playing music and videos, and dancing. New functional arms and a variety of interactive control features are being developed.

The Aeolus Robot


The Aeolus Robot is one of the first household robot assistants. Equipped with an agile arm, it is able to move household objects and can recognize and adapt to changing environments. The robot can learn, navigate and complete tasks independently. With key features such as recognition of thousands of items, it can pick up items off the floor and put them away in their proper storage areas. The robot can use a vacuum or a dry mop to clean floors, and continually adapts to unique home layouts and routines. With advanced sensory and biometric technologies, it can recognize and differentiate between family members, the physical living space and household items.

How to control robots with brainwaves and hand gestures

Getting robots to do things isn’t easy: Usually, scientists have to either explicitly program them or get them to understand how humans communicate via language.
But what if we could control robots more intuitively, using just hand gestures and brainwaves?
A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.
Building off the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.
By monitoring brain activity, the system can detect in real-time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.
The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.
“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” the researchers say. “By including muscle feedback, the gestures can be used to command the robot spatially, with much more nuance and specificity.”
In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.
Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.
Instead, the team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.
For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.
To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.
Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
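The supervision loop described above can be sketched in a few lines. This is purely illustrative, not the CSAIL team's code: `robot`, `targets`, `detect_errp`, and `read_gesture` are all hypothetical stand-ins for the EEG classifier and EMG gesture decoder.

```python
def supervise(robot, targets, detect_errp, read_gesture):
    """Run one supervised robot action (all four arguments are hypothetical).

    The robot acts on its default choice; if an error-related potential
    (ErrP) is detected in the observer's EEG, it stops and lets EMG hand
    gestures scroll through the candidate targets and confirm a correction.
    """
    choice = 0
    robot.move_to(targets[choice])
    if detect_errp():              # EEG: did the observer notice a mistake?
        robot.stop()
        while True:
            g = read_gesture()     # EMG: 'left', 'right', or 'select'
            if g == "left":
                choice = (choice - 1) % len(targets)
            elif g == "right":
                choice = (choice + 1) % len(targets)
            elif g == "select":
                break
        robot.move_to(targets[choice])
    return choice
```

The key property, as in the drill experiment, is that the human only intervenes when an error is noticed; when no ErrP fires, the robot carries on uninterrupted.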
The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.

Sunday 24 June 2018

Trends in Robot Accessories

The right accessories can boost the flexibility and reliability of robotic assembly operations.

The efficiency and reliability of an industrial robot largely depend on its end-of-arm tooling (EOAT). However, we often forget that the interface between the end-effector and the robot arm also plays a critical role. Devices such as tool changers, compensation units, and force and torque sensors have a great influence on the robot’s performance, flexibility and fields of application.
Tool changers lend speed and flexibility to a robotic assembly line. Even an experienced operator requires 10 to 30 minutes to manually change the EOAT on a robot. With a tool changer, the same operation takes 10 to 30 seconds.
A tool changer consists of two components: a quick-change head, which is mounted on the robot arm, and a quick-change adapter, which is connected to the tool. The tool changer provides feed-throughs for electric, pneumatic and hydraulic connections between the arm and the EOAT. During a tool change, both components are automatically or manually coupled. When selecting a tool changer, engineers should look for a device with a low weight-to-force ratio, short change times, and precision-made energy-transfer points.
Integrating micro valves into a quick-change system can replace a complete valve terminal. Without them, the entire feeding hose must be pressurised along with the actuator’s cylinder at every cycle, which wastes air and time. If you are using a feeding hose that’s 3 meters long and 4 millimeters in diameter, micro valves will reduce air consumption by 90 percent. Instead of cable and wire bundles, just one pneumatic line for compressed air and one power supply line are required. Both lines can usually be fed through a center bore with a diameter of 12 millimeters, and they can be integrated into the arm of a SCARA robot.
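A quick back-of-envelope check makes the air-saving claim concrete: the dead volume of the quoted 3 m, 4 mm hose is what gets pressurised and vented on every cycle when the valve sits at the far end. (The cylinder volume itself is not given in the article, so this only computes the hose contribution.)

```python
import math

# Dead volume of the feeding hose from the article's figures:
# 3 m long, 4 mm inner diameter. Mounting micro valves at the tool
# means this volume no longer has to be filled every cycle, which is
# where the quoted ~90% reduction in air consumption comes from.
length_m = 3.0
diameter_m = 0.004
radius_m = diameter_m / 2

hose_volume_l = math.pi * radius_m**2 * length_m * 1000  # litres
print(f"hose dead volume per line: {hose_volume_l:.4f} L")
```

About 0.038 litres per line per cycle sounds small, but multiplied over millions of cycles on a high-speed SCARA line it adds up quickly.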
To avoid problems during robotic insertion operations, compensation units provide some compliance between the end-effector and the robot arm. This prevents damage to the assembly or the EOAT and increases the reliability of the process. The latest compensation devices work without pneumatics. Compliance in two directions is adjusted via springs with adjustment screws. Compliance in three directions is adjusted via elastomeric elements. Because the units work without pneumatics, they are flat and particularly suitable for use in confined areas. Smooth-running roller guides compensate for smaller forces without stick-slip effects.
The latest trend in robotics is to equip the robot with force and torque sensors. Such sensors give the robot the ability to precisely adjust its path based on feedback. This is critical for obtaining consistent results in robotic grinding and finishing applications.

Saturday 23 June 2018

The Exoskeletons of the Future Might Just Be Comfortable Pants

When the term “exoskeleton” comes to mind, it’s easy to think of science fiction or bulky military enhancements. The idea of militaries driven by men with the strength of robots has conjured up dreams for decades. But what if the future of exoskeletons isn’t Iron Man or Master Chief, but grandma maintaining her self-reliance by getting some help with simple acts like walking up the stairs?
Exosuit
The soft exosuit uses a combination of sensors, including a hyperelastic strain sensor (1) and sensors around the wearer’s hip, calf and ankle (2)-(5), all secured by straps. Flexible membranes cover the sensors and straps (6)
“Soft” exoskeletons, developed by researchers at the Wyss Institute at Harvard University, certainly have military applications. But these exoskeletons are pants. Comfortable pants. These pants would have a waist belt, two thigh pieces, and two calf straps. Cables run from them to a motor carried in a backpack on the wearer’s back. The thigh pieces monitor the wearer’s gait, which the researchers analyze using human kinematics, the study of the motion of multi-joint systems. The sensors use electromyography, a procedure used to assess the health of muscles and the nerve cells that control them, to determine the parts of the body that need the most assistance.


Participants in the experiments carried thirty percent of their body weight at three miles per hour on a treadmill, slightly faster than the average human walking speed of two and a half. They walked normally, walked with the weight, and then walked with the weight and the exosuit. The load put a strain on the body that, without the suit, was manageable but still noticeable. The results were noticeable too. Measuring the metabolism and strain of each participant, the researchers determined there was “a significant metabolic power reduction” while wearing the exosuit. The participants’ hips, knees, and ankles were also all working significantly less while the exosuit was on.


This wasn’t the Wyss Institute’s first foray into robotic wearables; they’ve also worked on exo-gloves. And while their work has been placed in a military milieu, it’s clear that the Institute has an eye towards commercial sales. “A key feature of exosuits is that if the actuated segments are extended, the suit length can increase so that the entire suit is slack, at which point wearing an exosuit feels like wearing a pair of pants and does not restrict the wearer whatsoever.”

Friday 22 June 2018

Meet the Tiny Origami Robots That Will Unfold in Your Stomach

Science often has a way of catching up with science fiction. While it never claimed the fan fervour of Star Trek, the 1966 movie Fantastic Voyage offered a tantalizing promise: shrinking scientists down so they could extract information from a human body from the inside. MIT scientists haven’t gotten the shrinking figured out, but by making ingestible microscopic robots that can enter human organs, they’ve managed to get at least half of it down. The robots are based on the concept of origami. You take a capsule, and the robot unfolds itself into a more functional form. MIT first announced the development of these origami robots in 2014, with the hopes of making robots as small as possible. At the time, Daniela Rus, Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, talked about how gaining control of the geometry of the robots was crucial to the project: “We can do the sequencing, we have a lot more control,” she said.
Origami Robot
The construction of the robot 
That control is on full display in the latest developments from Rus and her team. “In our calculation,” says Shuhei Miyashita, a lecturer in electronics at the University of York who worked with Rus, “twenty percent of forward motion is by propelling water—thrust—and eighty percent is stick-slip motion,” a method of locomotion in which the robot sticks to a surface through friction when it moves and slips free when it changes its weight distribution. “In this regard, we actively introduced and applied the concept and characteristics of the fin to the body design, which you can see in the relatively flat design.” Once inside the body, the robot unfolds into a rectangular design with accordion folds. Crucial to the robot’s success inside the body is the magnet at its core. Outside magnetic forces are used to steer the robot through the intestines, and the magnet was also critical to the robot’s success in its test mission.
Origami Robot
The size is small enough to be easily ingested 
The team got pig intestines from Boston’s Chinatown and used them to construct a synthetic stomach, in which they placed a battery. It was a realistic scenario: there were a reported 11,940 incidents of children under six swallowing batteries between 2005 and 2014. After being steered into place by outside magnetic fields, the robot used its central magnet to grab the battery. It’s a system that is very reliant on outside observation, which is what Rus and her team hope to address next: they are working on adding sensors to the robot and redesigning it so it can control itself. So while the chances of humans shrinking down to get things out of your body are slim, at least something with physical autonomy could be pulling batteries out some time soon.

MIT Device Makes You a Superhuman With a Third Hand

What if instead of one hand on each arm, you had two? That’s what Sang-won Leigh is trying to achieve with his Robotic Symbiotic wearable that he developed as a project for MIT’s Fluid Interfaces course. The robotic hand (described as a “body integrated programmable joints interface”) is equipped with 11 motors that can be rearranged and reprogrammed to satisfy different uses. The device can serve as a large extra finger or as an entire hand. It can sit below your wrist and clamp onto things to hold them still, or pick up objects while your fleshy human hand is left free to, I don’t know, play with your phone.
The Third Hand by MIT
The robotic hand is controlled separately from your real hand 
The neat thing about Leigh’s robotic hand is that the machine is controlled separately from your real hand. You can hold your hand completely still and move the robotic appendage, and vice versa. How? The device senses electrical signals sent to the muscle in your forearm. This muscle is not used to move your hand, but after practicing for a few hours, you can flex and move it in ways that will cause the robot hand to respond. 

You can also use the device as a joystick or trigger, actually manipulating it with your hand to control a computer interface or play a video game. While there certainly could be applications for people who have disabilities or were injured in combat, Leigh has something else in mind for his human-augmentation device. “A lot of people think about machine augmentation in terms of rehabilitation,” Leigh says, “But we envisioned it as an assistive technology that wasn’t just for people with challenges, but which could turn people with normal physiology into super humans.”

Wednesday 20 June 2018

IBM’s Librarian Bot Swaps Data Tapes Like a Madman

There’s no such thing as the cloud, the saying goes—there’s just other people’s computers. But intensely data-driven medium-sized companies—say, banks or insurance companies—can handle huge volumes of data on-site using an IBM system called the TS4500, which comes with a handy robot to fetch files for you.
IBM TS4500
Huge data storage enabled by one plucky machine
The tape library in the data storage unit is meant to maximize the amount of 1s and 0s a company can store in a small space, which is especially useful for organizations that must keep years of older data around. According to IBM’s specs, the library can house more than 16 petabytes (at 3:1 compression) in a 10-by-10-foot space and expand as needed. That’s at least 16 million gigs to organize and protect in roughly the area of a freight elevator, with all that hard storage existing on perhaps tens of thousands of tapes. It’s a job for the robots.
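Some quick arithmetic shows how those figures relate. The cartridge count is an assumption for illustration (the article only says "tens of thousands"); the 16 PB and 3:1 compression figures come from the text above.

```python
# Rough arithmetic on the library figures (cartridge count assumed):
# 16 PB compressed at 3:1 implies roughly 5.3 PB of native capacity,
# and spreading 16 PB over ~20,000 cartridges puts well under a
# terabyte of compressed data on each tape.
TOTAL_PB_COMPRESSED = 16
COMPRESSION_RATIO = 3
CARTRIDGES = 20_000  # assumption: midpoint of "tens of thousands"

native_pb = TOTAL_PB_COMPRESSED / COMPRESSION_RATIO
tb_per_cartridge = TOTAL_PB_COMPRESSED * 1000 / CARTRIDGES
print(f"native capacity ~{native_pb:.1f} PB, ~{tb_per_cartridge:.1f} TB/cartridge")
```

With that many cartridges in play, a robotic picker is the only practical way to mount the right tape within seconds.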

This Tactile Robot Diver Lets Explorers Feel Around Shipwrecks

Stanford engineers have built a robot scuba diver that’s just the sort of sturdy, sensitive machine you need if you want to pick your way through a coral reef or a decaying shipwreck. Named OceanOne, the robot commands some of the most humanlike qualities of any remote-controlled machine designed to withstand ocean pressures at depths of hundreds of meters.
OceanOne
Ocean One transmits sensations back to its operator for delicate explorations 
The robot’s key traits: OceanOne is roughly person-sized (about 5 feet long), has binocular vision in a head that looks like Mega Man’s, and trawls the deep with two arms and hands that send haptic data back to its operator. This allows a person controlling OceanOne with joysticks to feel, roughly, the weight of the objects the robot is picking up. The computer scientist leading this project, Oussama Khatib, uses the word “avatar” to describe the relationship between the operator and OceanOne. It seems apt, and it opens up a different frontier, perhaps, in how we explore the deep. More than a mere drone (no offense, Sentry) or a chunky human-occupied pod, OceanOne is nimble, tactile, and seemingly intuitive. It can comb a fragile environment, in tight confines, and report its findings directly into the hands of its guide. Khatib told Stanford’s news service: “You can feel exactly what the robot is doing. It’s almost like you are there; with the sense of touch you create a new dimension of perception.”

Its maiden voyage into the live ocean was to prowl La Lune, a frigate of Louis XIV’s that sank off the southern coast of France in 1664 and was rediscovered only in 1993, some 300 feet below the surface. French underwater archaeologists have been probing the site for years, but it’s still striking to see OceanOne hustle down and retrieve a vase, with Khatib at the controls. No wonder the scientist hopes to build a small team of these robots in the near future.

Tuesday 19 June 2018

Drones Are Already Delivering Packages in Germany

Parcelcopter
The Parcelcopter completes five-mile trips within eight minutes


It may have been a while since DHL made headlines in the States, but it’s been focusing on shipping, shipping, and shipping. Now, it has sent over a hundred packages with drones—beating mammoths like Amazon to the punch—and has no plans to stop any time soon. The international shipper stayed local with its trial runs, delivering 130 packages within the Bavarian town of Reit im Winkl, a small town mainly known for its skiing and forestry, with the added challenges of alpine geography and the potential for snow. Residents could drop off packages in “packstations,” essentially lockers, from which point drones would control the entire shipping process.


Focusing the drones—which DHL adorably refers to as “parcelcopters”—within a small area allowed for quick speeds. Used from January through March of this year, the drones completed five-mile trips that usually take half an hour during the winter in just eight minutes. DHL thanked the German government, saying that the creation of a special restricted flight zone was crucial to the success of the project. It’s something Amazon would surely kill to get out of the U.S. government. Some say that Amazon’s famously aggressive posture has actually worsened its chances of a speedy approval, leading to severe pushback from unions and safety advocates. The company has threatened to move its drone program overseas, closer to the friendly environs enjoyed by DHL. Regardless of where Amazon’s technology goes, though, it’ll never be able to say it was first.
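A quick sanity check on the numbers quoted above: a five-mile trip in eight minutes works out to an average ground speed of 37.5 mph, against 10 mph for the half-hour winter road journey it replaces.

```python
# Average speeds implied by the article's figures:
# five-mile trips, drone in 8 minutes vs. half an hour by road.
trip_miles = 5
drone_minutes = 8
road_minutes = 30

drone_mph = trip_miles / (drone_minutes / 60)
road_mph = trip_miles / (road_minutes / 60)
print(f"drone: {drone_mph} mph, road: {road_mph} mph")
```

Nearly a fourfold speedup, which is plausible for a direct flight path over alpine terrain where roads wind around the valleys.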

Meet EMILY, the Robot Lifeguard Launched From a Helicopter

EMILY
Swims through the water with the greatest of ease

If you thought bulletproof, jet-powered superheroes in colorful outfits exist only in the movies and comic books, then it’s time to reconsider. EMILY, a remote-controlled robot lifeguard, recently proved her value by rescuing some 300 Syrian refugees from drowning off the Greek island of Lesbos. The robot, full name Emergency Integrated Lifesaving Lanyard, is designed to be thrown out of a helicopter (or from a boat or a bridge) and then driven up to a person in the water. EMILY has propulsion similar to a jet ski, with no propeller blades to cause injuries or get tangled. It zips along at a brisk 22 mph. Rough conditions are no problem; EMILY can handle 30-foot waves, survive collisions with rocks and reefs, and keep going. “EMILY is made of Kevlar and aircraft-grade composites and is virtually indestructible,” says Tony Mulligan, CEO of marine robotics company Hydronalix and EMILY’s inventor. EMILY is easy to spot thanks to its orange, red and yellow color scheme, and it has lights for night rescues. A two-way radio system allows rescuers to talk to people in the water and see them via a video camera. While the bot’s most obvious use is as an emergency flotation device for up to six people struggling in the water, it can also deliver life jackets or drag a rescue line 800 yards through surf or currents.


As with all superheroes, there is a cool origin story here. In EMILY’s case, it goes back to a 2001 project for a drone to track whales during Navy sonar testing. In 2011, elements of the original drone were used to create a new machine for hurricane tracking and disaster response. Other components were incorporated from the Office of Naval Research’s mysterious SwampWorks program. The end result of this collaboration between Hydronalix, the ONR, and the Navy’s Small Business Innovation Research program is EMILY. Bob Smith of SBIR calls EMILY “a classic overnight success story years in the making.” At just four feet long and twenty-five pounds, EMILY may be a little on the small side for a superhero. But unlike her fictional counterparts, she’s out there saving lives in the real world, with some 260 units in service with coast guards, navies and others, including the Roboticists Without Borders team that took EMILY to Greece. Now that’s a marvel.

Monday 18 June 2018

Swarms of Robot Spiders Could Be the Future of 3D Printing

The problem with most 3D printers’ “microwave from the future” design is their limited capacity: you can’t make anything larger than the printer itself, just as you can’t print a full-size poster on a desktop inkjet. If you could train a group of robot spiders to work together collaboratively, though, size limits could effectively disappear. That’s the theory behind the latest project coming out of Siemens, anyway. A few years past the 100th anniversary of the first assembly line in a Ford factory, the principle still rings true: you can make something faster with many hands working on small parts individually. “The idea,” says Livio Dalloro, head of the company’s Product Design, Simulation & Modelling Research and leader of the robot spider program, “is really to make these flexible, autonomous, communicating, general-purpose machines.”
Robot Spiders
To do that, Siemens has created six-legged robots with depth-perception cameras and awareness of their surroundings. Each robot takes its own size into account and works specifically in its designated area. They’ll even work in shifts so they can reload printing material and recharge, downloading data into a fresh robot after two hours. Mobile robots could simply be given a design for a project and break it up amongst themselves. At this point, the robots are too imprecise to build anything yet. The collaborative process is a complex one, even for robot spiders operated by a hive mind. Each bot requires an understanding of its own abilities and those of its fellow builders in order to work efficiently as a group. Siemens isn’t building the robots towards any specific goal right now, but sees the bots as the future of construction. Dalloro’s team has already got one robot to build a small plastic object; now they’re working on getting two together to build with concrete. Unleash the swarms.

Sunday 17 June 2018

Disney’s Clever New Transmission Can Give Robots the Most Delicate of Touches


A Hybrid Hydrostatic Transmission and Human Safe Haptic Telepresence Robot
The new robotic transmission allows robotic arms a greater level of sensitivity and precision

Disney has engineered a new robotic transmission that allows robotic arms a greater level of sensitivity and precision. These arms combine hydraulic and pneumatic lines and offer virtually no friction, allowing for remarkable levels of control. Normally, a robot joint would have two hydraulic cylinders balanced against each other. But Disney researchers departed from the norm, matching each water-filled cylinder with an air-filled cylinder. The pneumatic cylinder acts as a constant-force air spring, providing the required preload force and allowing the joint to move in both directions while cutting the number of hydraulic lines in half.
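The air-spring idea can be made concrete with a simple force balance. The numbers below are purely illustrative assumptions, not Disney's specifications: a constant-force air spring's preload is just regulator pressure times piston area.

```python
import math

# Illustrative preload calculation for a constant-force air spring
# (hypothetical bore and pressure, not Disney's actual figures):
# the pneumatic side holds a steady force F = P * A that the single
# hydraulic line works against, replacing the second hydraulic cylinder.
bore_m = 0.020                   # assumed 20 mm piston bore
gauge_pressure_pa = 2e5          # assumed 2 bar regulator pressure

piston_area_m2 = math.pi * (bore_m / 2) ** 2
preload_n = gauge_pressure_pa * piston_area_m2
print(f"air-spring preload: {preload_n:.1f} N")
```

Because the regulated air pressure is constant, the preload barely varies over the stroke, which is what lets the hydraulic side modulate joint torque so smoothly.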

The result is a very soft touch. Currently the robots have to be controlled by humans, who sometimes are wearing Oculus Rifts to see through the robot’s camera-eyes. Human control allows for a great chance to show off the variations of sensitivity in touch, and the team behind the transmission expects “the same level of mechanical performance once the motions are automated.”

Domino’s Delivery Robot Heats Your Pizza on Its Way to Your House

Dominos' pizza delivery bot
Maybe we should be worried about robots taking our jobs—Domino’s has enlisted the help of machines to deliver its pizzas as part of a trial in Australia.

The Domino’s Robotic Unit (DRU) is a ground drone that is able to navigate to the customer’s doorstep with the help of sophisticated on-board sensors. And here’s the best part—it keeps the food piping hot by warming it in its on-board oven along the way. Domino’s created the robot with the help of Marathon Robotics. The design was based on a military drone, and its cargo hold is PIN-code protected. DRU is undergoing early testing in Australia, which will hopefully lead to these things being deployed further afield. After all, drone deliveries are almost a thing. Amazon and its competitors have been testing the waters for a good while. But what good is a drone if it doesn’t deliver pizza?

Saturday 16 June 2018

RoboBee Clings to the Ceiling With Static Electricity

This insect-sized flying robot is smaller than a quarter, 12 times lighter than a paperclip, and zips through the air with a pair of flapping wings. That’s not even the impressive part. Using a trick of electrostatic energy, the minuscule bot can efficiently cling to the underside of any flat surface, from tree leaves to glass skylights to your plaster ceiling. This electric-powered perching is almost effortless—it takes 1,000 times less energy than is needed to fly. The insectoid bot is called RoboBee, and was developed by a team of researchers at MIT. RoboBee’s wings beat almost as quickly as a real honeybee’s, flapping at the lightning pace of up to 120 beats per second.




Robobee
Anatomy of the Robobee 
To cling to the underside of any flat surface, including wood, glass, brick, stone, and metal, RoboBee initiates a perching manoeuvre, swooping up to a stable position right underneath the surface of wherever it’s trying to stick. On top of RoboBee is a bulls-eye shaped patch, attached with a polyurethane foam mount. That foam mount ensures that lightweight RoboBee doesn’t bounce off as it swoops up to make contact with its target. Next, thin copper electrodes in the bulls-eye patch create a gentle tug of static electricity. RoboBee stays stuck as long as these copper electrodes produce this tiny amount of voltage. When finished perching, RoboBee smoothly detaches by cutting off the voltage, which cleanly eliminates the static cling and allows the bot to resume flight immediately. “One of the biggest advantages of this system is that it doesn’t cause destabilizing forces during disengagement, which is crucial for a robot as small as this.” This perching takes anywhere between 500 and 1,000 times less energy than flying for RoboBee. That’s such a low power requirement that it’s easy to imagine how, if future versions of RoboBee had deployable solar panels, the robot could even recharge its batteries while taking a break.
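To see the scale of that saving, here is a rough calculation. The hover power is an assumed figure for illustration (the article doesn't give one); the 500-1,000x ratio comes from the text above.

```python
# Rough scale of the perching savings (hover power is an assumption):
# if hovering costs on the order of 20 mW, perching at 1/500 to 1/1000
# of that draws only tens of microwatts -- little enough that a tiny
# solar cell could plausibly keep the batteries topped up.
flight_mw = 20.0                  # assumed hover power, milliwatts
perch_mw_low = flight_mw / 1000   # article: 500-1,000x less energy
perch_mw_high = flight_mw / 500

print(f"perching draw: {perch_mw_low} to {perch_mw_high} mW")
```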
Robobee
The Robobee is the size of a cent 

Friday 15 June 2018

Amazon Echo vs. Google Home, A Battle of Virtual Assistants

The battle of the virtual assistants is officially on. Google Home is a voice-activated product that allows you and your family to get answers from Google, stream music, and manage everyday tasks. Sound familiar?
Google Home vs Amazon Echo
Although launched about a year later, Google Home is favoured to beat Amazon Echo
Google Home, albeit a year or so late, is a direct response to Amazon Echo and other virtual assistants. Google Home, at first glance, looks like it will be quite useful, even superior to Amazon Echo in several ways. However, Amazon Echo has been around since 2014 and has sold more than 3 million units. 

Amazon Echo Plus: Amazon's Echo Plus can do everything a standard Echo can do, but it has better speakers and a built-in smart home hub. Normally, if you want to, say, control your Philips Hue lights with an Echo, you have to teach it that "skill" — meaning, you go into the Alexa app and activate that particular ability before you can freely use it. In the case of the Echo Plus, it can detect and connect to stuff like that automatically. The Plus is a vision of the future of the Echo, and what Amazon's after: a connected smart home, controlled by voice, using Echo devices that are connected to smart lights and TVs and whatever else.
Google Home Max: If you're big into audio, the Google Home Max is for you. If you're not, the Google Home Max is very much not for you. This device is more of a speaker than a smart device, though it certainly does everything that all the other Google Home devices do. It's got Google Assistant built in, and you can order a pizza from Domino's (or whatever), but the real idea with the Home Max is to take on stuff like the Sonos Play 5 speaker and Apple's upcoming HomePod. To drive this point all the way home (get it?), Google introduced the Home Max with a video starring Diplo. If you don't know who that is, this speaker definitely isn't meant for you.

How the Internet of Things will reshape future production systems

                                            



For decades, many of the world’s best companies have used their production systems as a source of sustainable competitive advantage. But such a system isn’t just about doing things well, with fast, efficient manufacturing processes and consistently high quality. What differentiates organizations like Danaher or Toyota is their ability to improve those operations continually, at a pace their competitors struggle to match.


Strong production systems have other powerful benefits too. They give companies a clear, precise picture of their own performance, allowing direct comparisons among plants, for example, and encouraging internal competition. They provide a common culture, vocabulary, and tool set that facilitates the sharing of best practices while minimizing confusion and misunderstanding. And by developing the skills of existing staff and creating an attractive environment for talented new hires, they help people contribute to the best of their ability.



The best production systems are simple and structured, and built around a company’s specific strengths and challenges. That requires a good deal of self-knowledge. A company must not only understand what it wants to achieve but also identify the methods, resources, and capabilities it will need to get there. Ultimately, a good production system is a unique, bespoke management approach that’s difficult for competitors to copy.
Today, even the highest-performing companies can boost their performance still further. That technology-driven opportunity comes from data—specifically, the huge volumes of data on processes and performance generated by new generations of network-connected devices: the Internet of Things (IoT). To capture the opportunity, companies must revisit and reassess many of the processes and principles that have been so successful for them in the past.

Four dimensions of the IoT’s impact
The advent of IoT technologies—and the more general move to digital tools that support operations, communication, analysis, and decision making in every part of the modern organization—won’t change the fundamental purpose of production systems. It will, however, transform the way they are built and run, offering improvements across four main dimensions:
  • Connectivity
  • Speed
  • Accessibility
  • Anchoring

Connectivity

Traditional production systems embody a collection of separate tools bound together loosely by the rules governing their application. Usually, these rules are defined, at best, in a paper document or on a corporate intranet site. In the future, such links will be much tighter and more automated, and fast digital connections will allow the whole system to operate as a seamless, cohesive whole.
Integration will change production systems in two ways:
First, performance measurement and management will be based on precise data. Sensors will monitor the entire production process, from the inspection of incoming materials through manufacturing to final inspection and shipping. Companies will store the output of those sensors in a single, central data lake, together with a host of additional data from other internal sources, as well as external ones (supplier specifications, quality indicators, weather and market trends). All these strands of data will combine to set the production system’s targets and measure its performance continually, so the staff will be able to see, at a glance, if the system is performing as it should.
Second, connectivity will support better fact-based decision making. Access to comprehensive, up-to-date production information, together with a complete historical picture, will take the guesswork out of changes and improvement activities. As the collection and reporting of data are increasingly automated, front-line operators and managers will play a larger role in solving problems and improving processes. Root-cause problem solving will be easier. Aided by advanced analytical techniques, staff will be able to identify the changed operating conditions that precede quality issues or equipment failures. Furthermore, stored information about similar issues solved elsewhere will help identify appropriate solutions.
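The deviation-flagging idea behind both points can be sketched in a few lines. This is purely illustrative; the sensor readings, target, and tolerance below are hypothetical, not from any particular production system:

```python
# Illustrative sketch: flag sensor readings that stray from a target.
# All values here are hypothetical.

def flag_deviations(readings, target, tolerance):
    """Return (index, value) pairs for readings outside target ± tolerance."""
    return [(i, v) for i, v in enumerate(readings) if abs(v - target) > tolerance]

# Example: temperature samples against a 180.0 target with a 5.0 tolerance
temps = [179.5, 180.2, 186.1, 180.0, 172.9]
print(flag_deviations(temps, target=180.0, tolerance=5.0))  # → [(2, 186.1), (4, 172.9)]
```

In a real plant the readings would stream from the central data lake, and the flagged deviations would trigger alerts rather than a printout.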

Speed

Today’s production systems are necessarily retrospective. While they aim to maximize responsiveness by emphasizing discipline, standards, and right-first-time practices, the reality falls short. Manual measurement and management mean that most opportunities for improvement cannot be identified until a shift ends and the numbers come in.
With the introduction of comprehensive, real-time data collection and analysis, production systems can become dramatically more responsive. Deviations from standards can be flagged for action immediately. The root causes of those deviations can therefore be identified more quickly, as can potential countermeasures. The entire improvement cycle will accelerate.
It isn’t just the management of day-to-day operations that will get faster. Capability building will, too, thanks to focused, online training packages customized to the specific needs of individual employees. Finally, IoT technologies will speed improvements in the production system itself—for instance, by automatically identifying performance gaps among plants or updating processes throughout the company whenever new best practices are identified.

Accessibility

Back-end data storage isn’t the only thing that will be unified in the production systems of the future. So will access. Staff at every level of the organization will get the tools and data they need through a single application or portal. That portal will be the organization’s window into the system’s dynamic elements—especially minute-by-minute performance data—as well as more static parts, such as standards, improvement tools, and historical data.
These portals—with responsive, customized interfaces ensuring that the right employees get access to the right information and tools at the right time—will simplify and accelerate the operation of the production system. If the system identifies a deviation on a production line, for example, the portal will be able to alert the team leader, show current and historical data on that specific process, and offer appropriate root-cause problem-solving tools, together with a library of solutions applied elsewhere.
Using secure and tightly controlled interfaces, the production-system portal will also be accessible beyond the organization’s boundaries: it will allow suppliers to track consumption and quality issues in materials, for example, or external experts to review current and historical performance to find improvement opportunities. Using online support and predictive analytical tools, manufacturers of equipment will increasingly operate, monitor, and maintain it remotely. The portal will even allow companies to benchmark their own performance automatically against that of others.

Anchoring

One of the most powerful effects of IoT and digital technologies, we foresee, will be to anchor the production system in the organization’s psyche. This will overcome the most critical challenge many companies struggle with today: sustaining change, so that the organization improves continually.
That anchoring effect will be achieved in several ways. First, the unified data, interface, and tool set will not only help enforce the adoption of standards but also ensure that the right way of doing things is the easiest way. Staff won’t need to improvise production plans or override machine settings if the optimum settings are just a button click away.
Second, future production systems will help the organization to collaborate more effectively. An end-to-end view of performance will break down barriers among functions and ensure that decisions reflect the interests of the business as a whole. The communication and sharing of information will be greatly enhanced, since a central knowledge hub and social-media tools will let staff in one area access support, ideas, and expertise from another.
Finally, future production systems will make performance far more visible: when the whole leadership can see the direct link between operational performance and profitability, for example, the production system will no longer be considered the concern solely of the COO. Digital dashboards on computers, mobile devices, and even smartwatches will show staff in every function and at every level exactly how the organization is performing, as well as the precise value of the contribution of their businesses, plants, or production cells. The result will be genuine transparency—not just about where the value is being created, but also about how.

Adopting IoT: Early wins
Although the fully integrated digital production systems described in this article don’t yet exist, many of the building blocks are already in place. The oil-and-gas industry, for instance, is rolling out industrial-automation systems that can monitor the health of expensive capital assets in remote locations. These systems facilitate timely preventative maintenance by using sensor data to generate real-time performance information and provide an early warning of potential problems. Automakers already have production lines where hundreds of assembly-line robots are integrated with a central controller, business applications, and back-end systems. This technology helps companies to maximize uptime, improve productivity, and build multiple models (in any sequence) without interrupting production.
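The preventative-maintenance pattern described above can be illustrated with a minimal sketch. The window size, threshold factor, and vibration figures are assumptions for illustration, not taken from any real monitoring system:

```python
# Illustrative early-warning check: flag an asset when recent sensor
# readings drift well above its healthy baseline. All numbers are hypothetical.

from statistics import mean

def needs_maintenance(readings, baseline, window=3, factor=1.5):
    """True if the mean of the last `window` readings exceeds baseline * factor."""
    if len(readings) < window:
        return False
    return mean(readings[-window:]) > baseline * factor

pump_vibration = [0.8, 0.9, 0.9, 1.4, 1.6, 1.7]  # a rising trend
print(needs_maintenance(pump_vibration, baseline=1.0))  # → True
```

Real systems use far richer predictive models, but the principle is the same: compare live sensor data against expected behaviour and warn before failure occurs.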

The next challenge for manufacturing companies is to complete the integration process. This will mean taking the tools and capabilities that now work on individual production lines or assets and extending them to the entire enterprise and then its entire supply chain. For companies that succeed, the reward will be greater efficiency, rich new insights, and dramatic, continual improvement in performance.

Thursday 14 June 2018

Once humans get along with machines: a new era of automation in the manufacturing sector can begin

Over the past two decades, automation in manufacturing has been transforming factory floors, the nature of manufacturing employment, and the economics of many manufacturing sectors. Today, we are on the cusp of a new automation era. The rapid advances in robotics, artificial intelligence, and machine learning are enabling machines to match or outperform humans in a range of work activities, including ones requiring cognitive capabilities.

Industry executives—those whose companies have already embraced automation, those who are just getting started, and those who have not yet begun fully reckoning with the implications of this new automation age—need to consider the following three fundamental perspectives: what automation is making possible with current technology and is likely to make possible as the technology continues to evolve; what factors besides technical feasibility to consider when making decisions about automation; and how to begin thinking about where—and how much—to automate in order to best capture value from automation over the long term.

How manufacturing work and workforces can change

To understand the scope of possible automation in the manufacturing sector as a whole, a study of manufacturing work was conducted across 46 developed and developing countries, covering about 80 percent of the global workforce. It shows that, as of 2015, 478 billion of the 749 billion working hours (64 percent) spent on manufacturing-related activities globally could be automated with currently available technology. Even though manufacturing is one of the most highly automated industries globally, there is still significant automation potential within the four walls of manufacturing sites, as well as in related functional areas such as supply chain and procurement.
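The study’s headline figure is simple arithmetic and can be checked directly:

```python
# Checking the automation-potential share quoted above.
automatable_hours = 478e9  # working hours automatable with current technology
total_hours = 749e9        # total manufacturing-related working hours (2015)

share = automatable_hours / total_hours
print(f"{share:.0%}")  # → 64%
```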
