Friday 29 August 2014

Run Android apps on your Windows PC

Clash of Clans, running in BlueStacks on a Windows PC

 After a bit of a slow start, Android’s application ecosystem has proven to be versatile and very developer-friendly. You are free to develop an app for Android and publish it to the Play Store with minimal restrictions. This has led to a plethora of really cool Android apps, some of which aren’t available on iOS or other platforms. Running Android apps usually requires an Android smartphone or tablet — obviously! — but what if you currently use iOS or another mobile OS, and want to try out Android without actually getting an Android device?

 Well, fortunately, with a little leg work, you can run Android apps on a regular old Windows PC. There are a few different ways to go about it, each with their own strengths and weaknesses.

Android Emulator 


The Android emulator

The most “basic” way to get Android apps running on a PC is to go through the Android emulator released by Google as part of the official SDK. The emulator can be used to create virtual devices running any version of Android you want with different resolutions and hardware configurations. The first downside is the somewhat complicated setup process.

You’ll need to grab the SDK package from Google’s site and use the included SDK Manager program to download the platforms you want — probably whatever the most recent version of Android happens to be at the time (4.4 at the time of publishing). The AVD manager is where you can create and manage your virtual devices. Google makes some pre-configured options available in the menu for Nexus devices, but you can set the parameters manually too. Once you’ve booted your virtual device, you’ll need to get apps installed, but the emulator is the bone stock open source version of Android — no Google apps included.

Since there’s no Play Store, you’ll need to do some file management. Take the APK you want to install (be it Google’s app package or something else) and drop the file into the tools folder in your SDK directory. Then, while your AVD is running, open a command prompt in that directory and enter adb install filename.apk. The app should be added to the app list of your virtual device.
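
If you find yourself sideloading apps often, the install step is easy to script. The snippet below is a minimal sketch using Python's subprocess module; it assumes adb is on your PATH (or that you run the script from adb's folder inside the SDK) and that your virtual device is already booted. The APK filename is just a placeholder.

# install_apk.py -- minimal sketch: sideload an APK onto a running Android virtual device.
# Assumes adb is reachable on the PATH and an AVD is already booted; the APK path is a placeholder.
import subprocess
import sys

APK = sys.argv[1] if len(sys.argv) > 1 else "filename.apk"

# List attached devices/emulators so we can confirm the AVD is visible to adb.
subprocess.run(["adb", "devices"], check=True)

# Wait until the device reports itself as ready, then install the APK.
subprocess.run(["adb", "wait-for-device"], check=True)
result = subprocess.run(["adb", "install", "-r", APK])  # -r reinstalls if the app is already present

if result.returncode == 0:
    print("Installed", APK, "- it should now appear in the app list of the virtual device.")
else:
    print("Install failed; check that the AVD is running and the APK path is correct.")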

The big upside here is that the emulator is unmodified Android right from the source. The way apps render in the emulator will be the same as they render on devices. It’s great for testing app builds before loading them onto test devices. The biggest problem is that the emulator is sluggish and lacks the hardware access to make games run acceptably.



Android PC ports

If you don’t mind a little extra work, you can have a more fluid Android app experience by installing a modified version of the OS on your PC. There are a few ports of Android that will run on desktop PCs, but support is somewhat limited because of the extensive hardware configuration options for PCs. The two leading choices for a full Android installation on PC are Android on Intel Architecture (UEFI-equipped devices) and the Android-x86 Project (pictured above).

Neither one is in a perfect state, and you’ll need a supported piece of hardware, like the Dell XPS 12 for Intel’s version or the Lenovo ThinkPad X61 Tablet for Android-x86. You could install them over the top of Windows, but that’s not the best idea. The smarter way would be to create a separate hard drive partition and install Android there.

 If your hardware isn’t supported by either of these projects, you can also try installing them in VirtualBox, which should be a little faster than the official Android emulator. It probably still won’t be good enough for games, but most apps should install and run correctly. Also note, you’ll have to install the apps you want manually as there’s no Google Play integration here either.



BlueStacks App Player

If you’re looking to get some apps and games up and running on your computer with the minimum of effort, BlueStacks is your friend. The BlueStacks App Player presents itself as just a way to get apps running, but it actually runs a full (heavily modified) version of Android behind the scenes. Not only that, but it has the Play Store built in, so you have instant access to all of your purchased content. It even adds an entry to your Google Play device list, masquerading as a Galaxy Note II.

The BlueStacks client will load up in a desktop window with different app categories like games, social, and so on. Clicking on an app or searching does something unexpected — it brings up the full Play Store client as rendered on tablets. You can actually navigate around in this interface just as you would on a real Android device, which makes it clear there’s a lot more to BlueStacks than the “App Player” front end. In fact, you can install a third-party launcher like Nova or Apex from the Play Store and set it as the default. The main screen in BlueStacks with the app categories is just a custom homescreen, so replacing it makes BlueStacks feel almost like a regular Android device.

BlueStacks App Player, in the Play Store 

Having full Play Store access means you won’t be messing around with sideloading apps, and BlueStacks manages to run everything impressively well. Most games are playable, but keep in mind you’ll have trouble operating many of them with a mouse. If your PC has a touchscreen, you can still use apps and games that rely on more than one touch input. BlueStacks can essentially make a Windows 8 tablet PC into a part-time Android tablet. BlueStacks calls the technology that makes this possible “LayerCake” because Android apps run in a layer on top of Windows.

The only real issue with BlueStacks is that it’s not running a standard Android build. All the alterations the company made to get apps working on a PC can cause issues — some apps simply fail to run or crash unexpectedly. This customized environment is also of little value as a development tool because there’s no guarantee things will render the same on BlueStacks as they might on a real Android device without all the back end modifications. It’s also a freemium service with a $2 pro subscription, or you can install a few sponsored apps.


So what’s the best way?

If you’re interested in getting apps running on your PC so you can actually use and enjoy them, BlueStacks App Player is the best solution. It’s fast, has Play Store access, and works on multitouch Windows devices.
If you need to test something with the intention of putting it on other Android devices, the emulator is still the best way to give builds a quick once-over on a PC before loading them on to Android phones or tablets. It’s slow but standardized, and you’ll be able to see how things will work on the real deal. The Android PC ports are definitely fun to play with, and performance is solid when you get apps running, but they can be finicky. If you just want to play Clash of Clans on your Windows machine, get BlueStacks.
Extremetech
   




ET deals: FreedomPop offers Samsung Victory 4G LTE smartphone with free voice, data for $80


FreedomPop, the purveyor of free high-speed internet, is back with something different – the lowest-cost 4G LTE mobile phone plan we’ve ever seen. Today they’re offering huge savings on a certified pre-owned Samsung Victory LTE smartphone, bundled with a free trial of one of their premium plans, all for just $80. We should point out that you never have to pay again if you don’t want to; you’ll still get voice and data service, meaning you could potentially save hundreds off your regular phone bill.

The Samsung Victory is a capable device with reasonable specs for a lower-end smartphone, making it a great choice for first-time smartphone buyers or anyone with more casual needs. It features a four-inch 800×480 touchscreen display, a 1.2GHz dual-core Snapdragon S4 Lite processor, dual-facing cameras (5MP rear, 1.3MP front) and Android 4.1, giving it plenty of power for light gaming, video chats, Google Now usage, and more. There’s also 4GB of built-in storage (expandable via microSD), plus NFC and, of course, LTE connectivity.

  Basic access to FreedomPop’s plan starts at precisely zero dollars per month. That’s right, it’s completely free. The basic plan affords you 200 minutes, 500 texts, and 500MB of LTE data per month, which will require light usage but provides plenty of value considering the cost. As part of this bundle, FreedomPop is giving you a free one-month trial of their unlimited talk and text plan, which is just what it sounds like, so you can talk and text freely to determine what plan you’ll really need. If you’re a heavier data user, there’s also an option for unlimited data (the first 1GB is LTE, after that it’s 3G).

FreedomPop operates on Sprint’s network, so you’ll need to check the coverage map and make sure you have access to Sprint’s 3G or LTE services before you get started. FreedomPop’s pricing is already far below the traditional carriers (including Sprint), but they also offer things like “Social Broadband,” where you gain more data for referring your friends, and the option to complete offers to get even more free data. If you’re determined not to spend any money on their service and still get tons of data, you can find a way. We should note that there are no contracts or commitments here either.


All in all you’re getting the certified pre-owned Galaxy Victory ($239.99 value), a free month of their unlimited talk and text plan ($10.99 value), and free shipping ($14.99 value), all for just $79.99. That’s a sizable 70% off, and FreedomPop offers a 30-day money back guarantee as well as a 90-day warranty on the phone. Take advantage of this bundle today and see if this is a service that can work for you.
Samsung LTE Victory smartphone (certified pre-owned) with unlimited talk and text, and 500MB data for $79.99 with free shipping. Get $185.98 savings instantly.
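
For the curious, the advertised savings check out; here is the back-of-the-envelope arithmetic, using only the figures quoted in the listing above:

# Quick sanity check of the bundle math quoted above.
phone, plan, shipping = 239.99, 10.99, 14.99    # listed values of the phone, plan month, and shipping
price = 79.99                                   # bundle price
total_value = phone + plan + shipping           # 265.97
savings = total_value - price                   # 185.98
print(round(savings, 2), round(100 * savings / total_value))   # prints 185.98 and 70 (percent)
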
Extremetech
  

The American Midwest: Traveling where the cloud can’t follow

Cloud Failure
Every year, my dad’s relatives get together for a family reunion. I love visiting with my family, but it comes with one major downside: the location. You see, my dad’s family all live in rural Ohio and West Virginia — where the cloud goes to die. Not only is cell coverage spotty, but residential internet access isn’t much better. Speeds are generally abhorrent, and you’re lucky if you can maintain a connection for an entire day in some parts of town. Against my better judgement, I decided to try to stream all of my entertainment during my most recent trip. Here’s how it went down.

  The trip to Ohio is relatively painless, and I had a strong LTE connection for most of my time spent on the Pennsylvania Turnpike. It wasn’t until I left the major highways in West Virginia that cell coverage became a real problem. As you can see from Verizon’s official coverage map (below), large swaths of West Virginia and Ohio are white — meaning no coverage. In some cases, there wasn’t even enough signal for simple text messages to send properly. Trying to stream video or audio over a connection like that is a fool’s errand.

After a few failed attempts in the car, I decided to wait until I got settled into the hotel before I started testing my streaming services in earnest.

Verizon Coverage 


A failure to communicate

Now, hotels aren’t exactly known for having the world’s most reliable internet connections, but I figured it was my best shot since it was in a city. My family lives about 30 minutes from the hotel, and it’s nothing but farm land and mountains out there. In other words, this was my chance. I connected my tablet, smartphone, and Vita to the hotel WiFi, and I started testing the connection. The average download speed was a little less than 1Mbps, and the upload speed was an abysmal 123Kbps. Worse, the ping to the closest SpeedTest.net server was 100ms, so the round trip to my FiOS connection 300 miles away in Delaware was higher still. At that point, I knew for sure that any sort of fast-paced game would be impossible to control.

Speed Test in WV

To start my night of frustration off right, I attempted to connect to my PS4 over Remote Play. I opened the app, hit connect, and waited patiently for about a minute. It searched the local network first, and then checked with Sony’s servers to find the address of any paired PS4. It found my console, and began the process of connecting, but it bombed out after checking the “connection environment.” I couldn’t even get it to the main menu. The Vita just refused to finish the connection process — likely due to the bandwidth constraints.

Vita Cannot Connect 

I switched my Vita over to a Verizon MiFi with a slightly faster connection, but it was still largely unplayable. Sure, the latency was bad, but the severe artifacting made reading some of the in-game text nigh-on impossible. After about a minute of running around in Watch Dogs, that connection dropped as well. Compared to my experience with the strong LTE connection in my home state, this was a complete failure.


See no evil, hear no evil

After that, I tried to stream music over Spotify and Plex. Across both internet connections, the music streaming worked, but not as well as I’d like. The music would play, but hiccups were frequent. Buffering in the middle of your favorite jam is never a good time.

Video, however, was an even worse experience. My Plex server was configured to stream 720p video out at 3Mbps, but I knew that wasn’t going to fly here. The hotel WiFi was too slow for that, so I cranked it down to a 720Kbps video feed. I selected an episode of Good Eats, and waited for it to load. After about a minute, it started playing, but it stopped to buffer a few minutes later. I let it spin, watched another few minutes, and then it needed to buffer again. I couldn’t take it.
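
As a rough sanity check, here is the arithmetic on those bitrates against the measured connection. It treats the downlink as a steady 1Mbps, which, as the constant rebuffering showed, it was not; the point is simply how little headroom each setting leaves.

# Rough feasibility check of the Plex bitrates mentioned above against the ~1Mbps hotel link.
# All figures are in kilobits per second; this assumes a perfectly steady connection, which it wasn't.
link = 1000                          # measured downlink, a little under 1Mbps
for bitrate in (3000, 720, 320):     # Plex settings: 3Mbps, 720Kbps, 320Kbps
    ratio = bitrate / link
    verdict = "will buffer constantly" if ratio > 1 else "fits, with %d%% headroom" % round((1 - ratio) * 100)
    print("%4dKbps stream on a %dKbps link: %s" % (bitrate, link, verdict))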

Plex Settings 
I bailed out of my TV show, and went into the Plex settings. The next option down was 320Kbps — an extremely low bitrate. I selected it, and tried watching my cooking show again. This time it played without stopping to buffer, but it was a blurry mess. Just like the streaming experience on the Vita, any text on the screen was just too mangled to read. It did work, but the experience was completely miserable.

The next day, I drove a half-hour to get to the reunion. Up on the mountain, the situation was even worse. My signal was so bad that I couldn’t load a web page, and other internet options were nearly as useless. Most of my family members there simply don’t have an internet connection, and the one that does only has access over satellite. As someone who suffered with a satellite internet connection for years, I can assure you that it’s a miserable way to surf the web. Even if you disregard the slow speeds and ridiculous latency, the oppressive bandwidth caps make streaming any substantial amount of video completely impossible.


Cloud-free Ohio

Once you get outside of the densely populated areas here in the US, the utility of the cloud completely falls apart. I went into this expecting nothing to work, and that’s what I got. Luckily, I get to go home. I get to stream all day long, and download whatever I want whenever I want. Unfortunately, my family members in rural Ohio aren’t so lucky. There’s no Netflix, Hulu, or PS Now in their lives. With the severe usage limits in place, even basic cloud storage, like Dropbox or Google Drive, is out of the question for some of them.
Extremetech
   
 
 

DARPA’s tiny implants will hook directly into your nervous system, treat diseases and depression without medication

Stanford's mid-field wirelessly powered microimplant, the size of a grain of rice

DARPA, on the back of the US government’s BRAIN program, has begun the development of tiny electronic implants that interface directly with your nervous system and can directly control and regulate many different diseases and chronic conditions, such as arthritis, PTSD, inflammatory bowel diseases (Crohn’s disease), and depression. The program, called ElectRx (pronounced ‘electrics’), ultimately aims to replace medication with “closed-loop” neural implants, which constantly assess the state of your health, and then provide the necessary nerve stimulation to keep your various organs and biological systems functioning properly. The work is primarily being carried out with US soldiers and veterans in mind, but the technology will certainly percolate down to civilians as well.

The ElectRx program will focus on a fairly new area of medical therapies called neuromodulation. As the name implies, neuromodulation is all about modulating your nervous system, to improve or fix an underlying problem. Notable examples of neuromodulation are cochlear implants, which restore hearing by directly modulating your brain’s auditory nerve system, and deep brain stimulation (DBS), which appears to be capable of curing/regulating various conditions (depression, Parkinson’s) by overriding erroneous neural spikes with regulated, healthy stimulation.

  The Alpha IMS retinal prosthesis, implanted in a human patient
A state-of-the-art retinal implant and its controller/battery. Current implants are not particularly small things.

So far, these implants have been fairly big things — about the size of a deck of cards — which makes their implantation fairly invasive (and thus quite risky). Most state-of-the-art implants also lack precision — the stimulating electrodes are usually placed in roughly the right area, but it’s currently very hard to target a specific nerve fiber (a bundle of nerves). With ElectRx, DARPA wants to miniaturize these neuromodulation implants so that they’re the same size as a nerve fiber. This way they can be implanted with a minimally invasive procedure (through a needle) and attached to specific nerve fibers, for very precise stimulation.


While these implants can’t regulate every condition or replace every medication — at least not yet — they could be very effective at mitigating a large number of conditions. Basically, many conditions are caused by your nervous system misfiring — most notably inflammatory diseases, but also potentially brain and mental health disorders. Currently, a variety of drugs are used to try and cajole these awry neurons and nerves back into line by manipulating various neurotransmitters — but the same effect could be created with an electronic implant that “catches” the misfire, cleans up the signal, and then retransmits it.

DARPA’s ElectRx program

“The technology DARPA plans to develop through the ElectRx program could fundamentally change the manner in which doctors diagnose, monitor and treat injury and illness,” says DARPA’s Doug Weber. “Instead of relying only on medication — we envision a closed-loop system that would work in concept like a tiny, intelligent pacemaker. It would continually assess conditions and provide stimulus patterns tailored to help maintain healthy organ function, helping patients get healthy and stay healthy using their body’s own systems.”

Despite requiring a lot of novel technological breakthroughs, DARPA is planning to perform human trials of ElectRx in about five years. The initial goal will be improving the quality of life for US soldiers and veterans — though there’s no word on which condition DARPA will focus on. Something “simple” like arthritis is most likely, but I’m sure there’s a lot of interest in curing/regulating post-traumatic stress disorder (PTSD) as well. Earlier in the year, DARPA announced a similar program to develop a brain implant that can restore lost memories and experiences.

While DARPA’s ElectRx announcement is purely focused on the medical applications of miniature neural implants, there are of course a variety of other uses that might arise from elective implantation — both for soldiers, but also for civilians. With a few well-placed implants on your spine, you could flip a switch and ignore any pain reported by your limbs, allowing you to push your body harder and faster. With precision-placed implants around the right nerve fibers, you could gain manual control of your organs — you could slow down or speed up your heart, turbo-charge your liver, or tweak just about any other function of your body. Transhumanism here we come.
Extremetech   

The car designer who turned a sailfish into a supercar

How would your boss react if he had to sign off on an expensive stuffed fish you’d bought on a whim while on holiday? Most of us would probably answer “not very thrilled”. But Frank Stephenson’s boss is not your average boss; his workplace is not your average workplace; and the fish? Well, that’s not your normal fish either.

 Stephenson is design director of McLaren Automotive, the carmakers behind a range of highly prized, high-priced cars. While on holiday in the Caribbean, he noticed a sailfish on a wall in the resort where he was staying. A man working there told Stephenson that he was proud to have caught the fish because it was so fast. Stephenson was intrigued – he began doing some research on the species to find out why it was so quick.

 

 

The McLaren P1 hybrid supercar uses design tricks inspired by the sailfish’s skin (McLaren)

 

On the way back to London, Stephenson stopped off in Miami and went down to a local fishing village, where, in a stroke of luck, a local fisherman had just caught a sailfish. He bought it, sent it downtown to get it stuffed and eventually got it delivered to the scanning department of the McLaren Automotive aerodynamics laboratory in Surrey – where the carmakers set to work trying to learn the secrets of the super-speed fish’s abilities. It’s just one of a number of recent initiatives by automotive companies to try and learn from techniques that have been used in nature.
The sailfish is a kind of turbo swordfish, one that has been clocked swimming 100m in around half the time it takes Usain Bolt to run it. It is capable of these bursts of breath-taking speed in order to chase down the small, fast-swimming fish it eats. The analysis revealed that the scales on the sailfish’s skin generate little vortices that result in the fish being enveloped in a bubble of air instead of denser water. This reduced drag allows the fish to move even faster.

Evolutionary design
McLaren’s designers applied the same texture as the scales of the sailfish to the inside of the ducts that lead into the engine of their P1 hypercar. This increased the volume of air going into the engine by 17%, improving the car’s efficiency: the P1’s hybrid powertrain produces 903 horsepower and thus needs large amounts of air pumped into the engine to help combustion and engine cooling.
The P1 also borrowed the little ‘diplets’ found on the sailfish’s torso where it meets the tail fin, which the fish uses to straighten out the flow of the pockets of air and water moving past it. This, Stephenson says, made the car more aerodynamic.



Stephenson's sailfish now has pride of place on the McLaren design studio wall (McLaren)


Nature has had millions of years to evolve its designs, says Stephenson, which is why he is trying to incorporate those tricks into his projects. “It just simply makes sense,” he says. “How can a lizard go upside down and stick to the surface for example? Well, if you can find out why, you can just apply that technology to the tyres of the car such that when the surfaces are wet, there’s no way that the car can slide.”
Hydrodynamic principles have often been refined by nature long before humans discovered them. For example, engineers from Mitsubishi Heavy Industries have recently developed an air lubrication system that separates water from the surface of ships to reduce drag by up to 80%; gas is not as dense as water, so the ship experiences less friction and glides more easily. This is a similar technique to that used by the sailfish. And it turns out it’s just as useful on land as it is in the water.

Boxfish brainwave
The McLaren design studio is a mix of science lab, art workshop and music rehearsal space. Its aim is to help designers gain a cutting edge by taking them out of their comfort zone. “Every day I’m doing research in my office; it looks like I’m studying biology in my office rather than car design,” says Stephenson.
McLaren have some company in this biomimicry department. Mercedes-Benz’s bionic car was inspired by a boxfish, with hexagonal plates on its outer skin which act like a “suit of armour”, giving the fish rigidity, protection and manoeuvrability. The cube-shaped structure creates surprisingly little drag; by mimicking this boxy shape, engineers were able to achieve a record-breaking drag coefficient – a value used to quantify the amount of resistance an object faces whilst trying to move through air or water. The Mercedes-Benz engineers also managed to lower fuel consumption by 20%, and lessen nitrogen oxide emissions by 80%.

 
 The boxfish’s plate-like scales are another design copied by car manufacturers… (Wikimedia Commons)

Japanese carmaker Nissan, on their quest to create the ultimate collision-free vehicles, have created small “cute” robots called Eporo, which also feature tricks learned from creatures living beneath the waves. “We needed to look no further than Mother Nature to find the ultimate form of collision-avoidance systems in action, in particular, the behavioural patterns of fish,” says Toru Futami, the engineering director of advanced technology and research. The movement of fish, which swim in schools and avoid obstacles en masse, is the basic principle behind the algorithm that the robots’ locomotion is built on. “This algorithm is very simple,” adds Susumu Fujita, the co-creator of Eporo.

“Fish follow these three rules: Don’t go too far, don’t get too close and don’t hit each other,” explains Futami. The robots have already inspired features in Nissan cars including Intelligent Brake Assist (IBA) and Forward Collision Warning (FCW). These are the bases on which new autonomous cars could be built, eliminating the need for lanes, traffic lights and even indicators. Even some car parts could become obsolete. “We have systems that are going to eliminate the need for windscreen wipers – you don’t see animals with windscreen wipers on their eyes,” asserts Stephenson.
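
Those three rules map closely onto the classic “boids” flocking model. The sketch below is a toy Python illustration of the idea, not Nissan’s actual Eporo code: each agent drifts back toward the group when it strays too far, moves away from neighbours that get too close, and steers around obstacles. All thresholds and positions are made-up values.

# Toy illustration of the three flocking rules quoted above -- not Nissan's actual code.
# Rule 1: don't go too far (drift toward the group's centre).
# Rule 2: don't get too close (move away from crowding neighbours).
# Rule 3: don't hit anything (steer away from obstacles).
import math

def step(agents, obstacles, too_far=5.0, too_close=1.0, avoid=1.5, speed=0.1):
    new_positions = []
    for i, (x, y) in enumerate(agents):
        others = [p for j, p in enumerate(agents) if j != i]
        cx = sum(p[0] for p in others) / len(others)   # centre of the rest of the school
        cy = sum(p[1] for p in others) / len(others)
        dx, dy = 0.0, 0.0
        if math.hypot(cx - x, cy - y) > too_far:        # rule 1
            dx += cx - x; dy += cy - y
        for ox, oy in others:                           # rule 2
            if math.hypot(ox - x, oy - y) < too_close:
                dx += x - ox; dy += y - oy
        for ox, oy in obstacles:                        # rule 3
            if math.hypot(ox - x, oy - y) < avoid:
                dx += x - ox; dy += y - oy
        norm = math.hypot(dx, dy) or 1.0
        new_positions.append((x + speed * dx / norm, y + speed * dy / norm))
    return new_positions

# Example: three agents and one obstacle, advanced for ten time steps.
school = [(0.0, 0.0), (0.5, 0.2), (6.0, 0.0)]
for _ in range(10):
    school = step(school, obstacles=[(3.0, 0.0)])
print(school)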

 
 … In this case to create the shape of their new Bionic concept car (Mercedes-Benz)

Of course, not all inspiration comes from underwater. The Eporo robots, for example, contain Laser Range Finder (LRF) technology that was developed in an earlier Nissan project. It is inspired by the way a bumblebee can detect objects across a wide field of view, and can sense obstacles up to two metres away across a 180-degree arc. If one is detected, the robot is able to turn its wheels at right angles or greater to avoid a collision.
Biomimicry has become a watchword for many car companies, but nature’s secrets are also having an effect off the road. Recently, Nasa asked members of the public to choose a spacesuit design for the future from several candidates, one of which was inspired by “the scaly skin of fish and reptiles”.
Increasingly, designers are finding that nature’s tricks can surpass their own ideas. And so it makes sense to seek that inspiration from anywhere – even a stuffed sailfish on the wall of a resort in the Caribbean.
BBC
 

 

Google tests drone deliveries in Project Wing trials

Google has built and tested autonomous aerial vehicles, which it believes could be used for goods deliveries.
The project is being developed at Google X, the company's clandestine tech research arm, which is also responsible for its self-driving car.
Project Wing has been running for two years, but was a secret until now.
Google said that its long-term goal was to develop drones that could be used for disaster relief by delivering aid to isolated areas.
They could be used after earthquakes, floods, or extreme weather events, the company suggested, to take small items such as medicines or batteries to people in areas that conventional vehicles cannot reach.

"Even just a few of these, being able to shuttle nearly continuously could service a very large number of people in an emergency situation," explained Astro Teller, Captain of Moonshots - Google X's name for big-thinking projects.

Australia tests
Google's self-flying vehicle project was first conceived of as a way to deliver defibrillator kits to people suspected of having heart attacks. The idea was that the drones would transport the equipment faster than an ambulance could.
"When you have a tool like this you can really allow the operators of those emergency services to add an entirely new dimension to the set of tools and solutions that they can think of," said Dave Voss, incoming leader of Project Wing.

Project Wing
 The Project Wing trials have been held in Australia's north-eastern state Queensland

The prototype vehicles that the company has built have successfully been tested by delivering packages to remote farms in Queensland, Australia from neighbouring properties.
Australia was selected as a test site due to what Google calls "progressive" rules about the use of drones, which are more tightly controlled in other parts of the world.

Dual mode
Project Wing's aircraft have a wingspan of approximately 1.5m (4.9ft) and have four electrically-driven propellers.
The total weight, including the package to be delivered, is approximately 10kg (22lb). The aircraft itself accounts for the bulk of that at 8.5kg (18.7lb).
The small, glossy white machine has a "blended wing" design in which the entire body of the aircraft provides lift.
The vehicle is known as a "tail sitter": it rests on the ground with its propellers pointed straight up, then transitions into horizontal flight.

This dual mode operation gives the self-flying vehicle some of the benefits of both planes and helicopters.
It can take off or land without a runway, and can hold its position hovering in one spot. It can also fly quickly and efficiently, allowing it to cover larger distances than the more traditional quadcopter vehicles available commercially.
The vehicles are pre-programmed with a destination, but then left to fly themselves there automatically.
This differs from many military drone aircraft, which are often remotely controlled by a pilot on the ground, sometimes on the other side of the world.
Eventually, Google said, it could use unmanned flying vehicles to deliver shopping items to consumers at home. That's a use that retail giant Amazon has already stated an interest in with its proposed Prime Air service - the announcement of which generated headlines at the end of last year.
Amazon has asked the US Federal Aviation Administration for permission to conduct outdoor tests.



Project Wing
Google would not be permitted to carry out the Project Wing tests in the US


"The things we would do there are not unlike what is traditionally done in aerospace," said Mr Voss.
"It will be clear for us what level of redundancy we need in the controls and sensors, the computers that are onboard, and the motors, and how they are able to fail gracefully such that you don't have catastrophic problems occurring."
Other unusual vehicles have been investigated for humanitarian aid, including flying cars and hoverbikes, with the same aims of reaching cut-off areas quickly.
"We will have to see what kind of specific technology works best within the aid landscape, and if the new technology can integrate positively in the local context," said Lou Del Bello from news site SciDev.net, speaking about the category in general.
"It will need to demonstrate it can be cost effective, and respond to actual needs of local people."
 BBC

What is OpenStack?

OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds. Backed by some of the biggest companies in software development and hosting, as well as thousands of individual community members, many think that OpenStack is the future of cloud computing. OpenStack is managed by the OpenStack Foundation, a non-profit which oversees both development and community-building around the project.

Introduction to OpenStack

OpenStack lets users deploy virtual machines and other instances which handle different tasks for managing a cloud environment on the fly. It makes horizontal scaling easy, which means that tasks that benefit from running concurrently can easily serve more or fewer users on the fly by spinning instances up or down as needed. For example, a mobile application which needs to communicate with a remote server might be able to divide the work of communicating with each user across many different instances, all communicating with one another but scaling quickly and easily as the application gains more users.
And most importantly, OpenStack is open source software, which means that anyone who chooses to can access the source code, make any changes or modifications they need, and freely share these changes back out to the community at large. It also means that OpenStack has the benefit of thousands of developers all over the world working in tandem to develop the strongest, most robust, and most secure product that they can.

How is OpenStack used in a cloud environment?

The cloud is all about providing computing for end users in a remote environment, where the actual software runs as a service on reliable and scalable servers rather than on each end user's computer. Cloud computing can refer to a lot of different things, but typically the industry talks about running different items "as a service"—software, platforms, and infrastructure. OpenStack falls into the latter category and is considered Infrastructure as a Service (IaaS). Providing infrastructure means that OpenStack makes it easy for users to quickly add new instances, upon which other cloud components can run. Typically, the infrastructure then runs a "platform" upon which a developer can create software applications which are delivered to the end users.


What are the components of OpenStack?

OpenStack is made up of many different moving parts. Because of its open nature, anyone can add additional components to OpenStack to help it to meet their needs. But the OpenStack community has collaboratively identified nine key components that are a part of the "core" of OpenStack, which are distributed as a part of any OpenStack system and officially maintained by the OpenStack community. (A brief code sketch showing how these pieces fit together follows the list.)
  • Nova is the primary computing engine behind OpenStack. It is a "fabric controller," which is used for deploying and managing large numbers of virtual machines and other instances to handle computing tasks.
  • Swift is a storage system for objects and files. Rather than the traditional idea of referring to files by their location on a disk drive, developers can instead refer to a unique identifier for the file or piece of information and let OpenStack decide where to store it. This makes scaling easy, as developers don’t have to worry about the capacity of a single system behind the software. It also allows the system, rather than the developer, to worry about how best to make sure that data is backed up in case of the failure of a machine or network connection.
  • Cinder is a block storage component, which is more analogous to the traditional notion of a computer being able to access specific locations on a disk drive. This more traditional way of accessing files might be important in scenarios in which data access speed is the most important consideration.
  • Neutron provides the networking capability for OpenStack. It helps to ensure that each of the components of an OpenStack deployment can communicate with one another quickly and efficiently.
  • Horizon is the dashboard behind OpenStack. It is the only graphical interface to OpenStack, so for users wanting to give OpenStack a try, this may be the first component they actually “see.” Developers can access all of the components of OpenStack individually through an application programming interface (API), but the dashboard gives system administrators a look at what is going on in the cloud and lets them manage it as needed.
  • Keystone provides identity services for OpenStack. It is essentially a central list of all of the users of the OpenStack cloud, mapped against all of the services provided by the cloud which they have permission to use. It provides multiple means of access, meaning developers can easily map their existing user access methods against Keystone.
  • Glance provides image services to OpenStack. In this case, "images" refers to images (or virtual copies) of hard disks. Glance allows these images to be used as templates when deploying new virtual machine instances.
  • Ceilometer provides telemetry services, which allow the cloud to provide billing services to individual users of the cloud. It also keeps a verifiable count of each user’s system usage of each of the various components of an OpenStack cloud. Think metering and usage reporting.
  • Heat is the orchestration component of OpenStack, which allows developers to store the requirements of a cloud application in a file that defines what resources are necessary for that application. In this way, it helps to manage the infrastructure needed for a cloud service to run.
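
To make the division of labor concrete, here is a minimal, illustrative sketch of how a developer might drive these components from code, using the openstacksdk Python client. It assumes a configured clouds.yaml entry named "mycloud", and the image, flavor, and network names are placeholders for whatever your cloud actually provides; Keystone handles the authentication behind openstack.connect(), Glance supplies the image, Neutron the network, and Nova boots the instance.

# Minimal sketch of driving OpenStack programmatically with the openstacksdk client.
# Assumes a clouds.yaml entry named "mycloud"; the image, flavor, and network names
# below are placeholders for whatever exists in your particular cloud.
import openstack

conn = openstack.connect(cloud="mycloud")          # Keystone: authenticate and build a session

image = conn.compute.find_image("ubuntu-14.04")    # Glance: the disk image to use as a template
flavor = conn.compute.find_flavor("m1.small")      # Nova: the hardware profile
network = conn.network.find_network("private")     # Neutron: the network to attach to

# Nova: boot a new virtual machine ("instance") from the image on the chosen network.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, "is", server.status)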

Who is OpenStack for?

You may be an OpenStack user right now and not even know it! As more and more companies begin to adopt OpenStack as a part of their cloud toolkit, the universe of applications running on an OpenStack backend is ever-expanding.

How do I get started with OpenStack?

If you just want to give OpenStack a try, one good resource for spinning the wheels without committing any physical resources is TryStack. TryStack lets you test your applications in a sandbox environment to better understand how OpenStack works and whether it is the right solution for you.
OpenStack is always looking for new contributors. Consider joining the OpenStack Foundation or reading this introduction to getting started with contributing to OpenStack.

How do I follow progress in OpenStack?

Because OpenStack is not owned by any single company, getting information about it can be a little confusing. Opensource.com is working to bring you up-to-date information about OpenStack in a format that helps answer common questions from end users, developers, and decision makers seeking to deploy OpenStack at their organizations. To get started, take a look at our OpenStack tag here on Opensource.com, which collects articles that can help you learn more about OpenStack and the community behind it.




The future of Linux: Evolving everywhere

Mark Shuttleworth's recent closure of Ubuntu Linux bug No. 1 ("Microsoft has a majority market share") placed a meaningful, if somewhat controversial, exclamation point on how far Linux has come since Linus Torvalds rolled out the first version of the OS in 1991 as a pet project.
Microsoft may not (yet) have been taken down on the quickly fading desktop, but the nature of computing has changed completely, thanks in large part to Linux's rise as a cornerstone of IT. There's scarcely a part of computing today, from cloud servers to phone OSes, that isn't powered by Linux or in some way affected by it.


But where from here? If Linux acceptance and development are peaking, where does Linux go from the top? Because Linux is such a mutable phenomenon and appears in so many incarnations, there may not be any single answer to that question.
More important, perhaps, is how Linux -- the perennial upstart -- will embrace the challenges of being a mature and, in many areas, market-leading project. Here's a look at the future of Linux: as raw material, as the product of community and corporate contributions, and as the target of any number of challenges to its ethos, technical prowess, and growth.




Linux: Bend it, shape it, any way you want it
If there's one adjective that sums up a significant source of Linux's power, it's "malleable." Linux is raw material that can be cut, stitched, and tailored to fit most any number of scenarios, from tiny embedded devices to massively parallel supercomputers.
That's also been one of Linux's shortcomings. Its protean nature means users rarely use "Linux" -- instead, they use a Linux-based product such as Android, or a hardware device built with a Linux base such as an in-home router. Desktop Linux's multiple (and often incompatible) incarnations winnow out all but the most devoted users.
"How end-users experience Linux is definitely fragmented," admits Jim Zemlin, executive director of the Linux Foundation. "But that's one of the powers of Linux.

"It's a building block that has allowed Google to build Android and Chromebooks, Amazon to build the Kindle, Canonical to build Ubuntu, and much more. All of those experiences are different for the user, but there is choice for the consumer."
Mark Baker, Ubuntu Server product manager for Canonical, which leads the Ubuntu project, puts it in almost exactly those words: "Open source delivers freedom of choice." Open source naturally encourages modularity, he says, so "with open source you can choose the best components for your situation," whether you're a user working on a home machine or a systems architect developing a data center.

But Al Gillen, program vice president for system software and an analyst at IDC specializing in operating environments, questions the value proposition of such total freedom going forward. "Linux is open source, and as such, anybody can fork off code and turn it into something else. However, the industry has shown that forks without value go away, and there is great value associated with staying close to main line code."
Android users have experienced this most directly with the fragmentation that exists between different editions of the OS. None of that is, strictly speaking, Linux's fault, but as with the myriad desktop distributions before it, Android fragmentation illustrates the tension that arises between allowing the freedom to change the product and the fallout of inconsistency of implementation.
Ironically, that might mean the best thing for Linux, going forward, is to double down on Linux as raw material.

Eric Sammer, engineering manager at Cloudera, doesn't see Linux alone as having users "the same way as something like Firefox or the Apache Web server." Linux "is targeted toward operating system builders, not the end-user," and so it needs "tons of other software -- much of it tightly coupled, from a user's perspective (such as a boot loader) -- to form a complete system." As Torvalds himself noted in the release notes for the very first Linux kernel, "A kernel by itself gets you nowhere."
Both Gillen's and Sammer's points are borne out by the fact that Linux's biggest uptake among users has been, again, Android, with all its attendant value added by Google and the app ecosystem developed for the OS. The malleability of Linux is only a first step toward an actual product -- as its most successful advocates understand.

Corporate contributors: Asset or obstacle?
Another of Linux's hallmarks is that it's a collaborative effort; out of the contributions of many come one. But where are those collaborators coming from?
Answer: Corporations -- mainly, those who stand to benefit from supporting Linux for their own future endeavors. Aside from Red Hat (the most widely recognized corporate vendor of Linux solutions apart from Canonical), top contributors include Intel, IBM, Texas Instruments, and even Microsoft.
Much of Linux's flexibility is due to such contributions, which expand Linux's ability to run on multiple platforms and on a broad spectrum of devices. Enlightened self-interest is the main motive here: Microsoft's own kernel additions, for instance, largely revolve around allowing Linux to run well under Hyper-V.

Sammer believes the prevalence of corporate-backed contributors is "due to the barrier of entry to any project as complex and critical as the Linux kernel. Your average C hacker doesn't have the time to get up to speed, build the credibility with the community, and contribute meaningful patches in their spare time, without significant backing." In his view, corporations most often have the resources to support such endeavors, with universities and research organizations being further behind.
But has the prevalence of corporate contribution to Linux turned the OS into a mere corporate plaything? Is that Linux's future, to be a toy of the monoliths?
What matters most is not who's contributing, but in what spirit. Linux advocates are firm believers in contributions to Linux, no matter what the source, as a net gain -- as long as the gains are contributed back to the community as a whole.

Mark Coggin, senior director of product marketing for Red Hat Enterprise Linux, believes "the best innovations are those that are leveraged, and improved by the greatest number of participants in the open source community."
"We put all of our innovations into open source projects, and seek to gain acceptance by those upstream groups before we incorporate them into our supported products like Red Hat Enterprise Linux. We hope that everyone who works to enhance the Linux kernel and the userspace projects also takes a view like ours," Coggin says.
It's also not widely believed that corporate contributions are a form of "hijacking Linux," as Gillen puts it -- a way to make Linux "less applicable to other major user contingents." He's convinced commercial support for Linux and commercial enhancements to Linux "are an asset to the Linux development paradigm; not a negative."
Likewise, to Zemlin, Linux development "is not a zero-sum game."


"What one developer does in the mobile space to improve power consumption can benefit a developer working in the data center who needs to ensure their servers are running efficiently," says Zemlin. "That shared development is what makes Linux so powerful."
Corporate contributions are not the enemy to him, either: "Having people paid to work on Linux has never been a bad thing; it has allowed it to be iterated upon quickly and innovation to be accelerated."
The real issues, as Baker notes, come when "some very large Web companies make some changes available and push them upstream, but decide to keep others in-house to give them an advantage."
Version 3 of the GPL -- the license Linux was released under in an earlier version -- was developed in part as a response to such behaviors. However, it only prevents taking code others have written and redeploying it as a Web service. There's no inherent (or legal) way to prevent code developed in-house from being kept in-house -- which might well simply be part of the ongoing social cost of offering Linux freely to the world.

The biggest threats to Linux
If corporate co-opting is less likely than ever, thanks to the mechanisms that keep Linux an open project, what real threats does it face?
Nobody takes very seriously the idea that Linux is about to be wiped off the map by a rogue patent threat or lawsuit. One of the biggest such legal attacks, SCO Group's lawsuit against IBM, widely construed as a proxy attack on Linux, failed miserably.
Coggin is of this mindset: "Linux's huge success, with a vast network of developers and widespread global adoption, means that it is highly resilient. Although patent threats arise from time to time, as they do with many technologies, it seems unlikely that a patent or combination of patents could pose an existential threat to Linux."

Plus, competition in the form of other closed source products, or even those with more liberal licensing (such as the various BSDs), hasn't really materialized to the degree that Linux runs the risk of being pushed aside.
Sammer sums up the biggest legitimate threat to Linux in a single word: complacency -- the complacency that goes with becoming a market leader in any field.
"If you're vying for first place," he says, "you're usually more open to change of process, of mindset, of road map, of status quo, whatever. I can't help but think of Firefox losing so much to Chrome so fast, or the commercial Unixes losing to Linux, or all the other examples of such things."
In roughly the same vein, Zemlin sees a threat in the form of a lack of experienced Linux talent to support the demand; hence the Linux Training program.

Gillen sees a threat coming from a transition that "over time, moves the majority of the Linux user community from the enterprise customer over to service providers."
Such a move would put Linux users at the mercy of people who may consume Linux and provide it as a service but don't return their innovations to the community as a whole. It may take a decade or more for such a shift to happen, but it could have "negative implications for Linux overall, and to commercial vendors that sell Linux-based solutions."
Another possible threat to Linux is corporate co-opting -- not of the code itself, but of the possibilities it provides. Baker is worried about the rise of mobile devices, many of which, although powered by Linux, are powered all the more by corporate concerns.


"That's why we need alternatives like Ubuntu and Firefox," says Baker, "to provide real alternatives for those who do not want their experience of the Internet to be determined by Apple or Google."
Of those two, Google -- by way of Android -- is the main offender in this accusation. Many of the arguments against Android revolve around it being a Linux-powered OS that's little more than a portal to Google's view of the world, and thus isn't true to the spirit of Linux.
In short, the biggest threats to Linux may well be from within -- unintended by-products of the very things that make it most attractive in the first place. Its inherent mutability and malleability has so far given it an advantage over complacency and co-opting, but it isn't clear that will always be true.

Where from here?
Linux is unquestionably here to stay, and in more than one form. But how it will do that and at what cost are up for debate.
The most obvious future path for Linux is where it becomes that much more of a substrate for other things -- a way to create infrastructure -- and where it becomes that much less a product unto itself in any form. The real innovation doesn't just come from deploying Linux, but deploying it as a way to find creative solutions to problems, by delivering it in such a way that few people are forced to deal with Linux as such, and by staying a step ahead of having it put behind technological bars.
Coggin puts it this way: "Linux is emerging beyond that of a packaged or flexible operating system to become more of an infrastructure platform. With this, we see developers and architects using Linux to build next-generation solutions, and creating next-generation enterprise architectures." Much of this work is already under way, he claims, in "cloud, big data, mobile, and social networks."

Gillen, too, agrees that Linux "is going to be a very key part of public cloud infrastructure, and as such, it has ensured itself a long-term role in the industry."
"Linux already runs the cloud, of that there is no doubt," says Baker. "It needs to maintain its position as the platform for scale-out computing -- this means staying ahead of new technologies like ARM server chips and hyperscale, software-defined networking, and the overall software-defined data center." Such work ought to complement other ongoing efforts to create open system hardware designs, such as the Open Compute Project's.

One possible downside of Linux becoming a ubiquitous infrastructure element is that it becomes as institutionalized as the commercial, closed source Unixes it has displaced. But Zemlin thinks Linux's very mutability works in its favor here: "If you would have asked Linus Torvalds or other members of the community a decade ago if Linux would power more mobile phones than any other platform, they certainly wouldn't have expected that. We'd rather just watch where it goes and not try to forecast since we most certainly will be wrong."

Another important future direction for some is, as mentioned above, "go[ing] mobile in a bigger way independently of Google," as Baker puts it. Projects like Mozilla's Firefox OS for phones are one incarnation of this, although it's unclear how much of a dent such a thing will make in Google's existing, and colossal, market share for Android.
Lastly, and most crucially, there's the question of who will be responsible for ushering Linux into its own future. While Linux can be forked and its development undertaken by others, history's shown that having a single core development team for Linux -- and equally consistent core teams for projects based on it -- is best.
That puts all the more burden on the core team to keep Linux moving forward in ways that complement its existing and future use cases, and not to protect it -- perhaps futilely -- from becoming something it might well be in its best interests to transform into.
If Linux's future really is everywhere, it might well also be in a form that no one now can conceive of -- and that's a good thing.
Infoworld.com

















What is open source?

The term "open source" refers to something that can be modified because its design is publicly accessible.
While it originated in the context of computer software development, today the term "open source" designates a set of values—what we call the open source way. Open source projects, products, or initiatives are those that embrace and celebrate open exchange, collaborative participation, rapid prototyping, transparency, meritocracy, and community development.


What is open source software?

Open source software is software whose source code is available for modification or enhancement by anyone.
"Source code" is the part of software that most computer users don't ever see; it's the code computer programmers can manipulate to change how a piece of software—a "program" or "application"—works. Programmers who have access to a computer program's source code can improve that program by adding features to it or fixing parts that don't always work correctly.


What's the difference between open source software and other types of software?

Some software has source code that cannot be modified by anyone but the person, team, or organization who created it and maintains exclusive control over it. This kind of software is frequently called "proprietary software" or "closed source" software, because its source code is the property of its original authors, who are the only ones legally allowed to copy or modify it. Microsoft Word and Adobe Photoshop are examples of proprietary software. In order to use proprietary software, computer users must agree (usually by signing a license displayed the first time they run this software) that they will not do anything with the software that the software's authors have not expressly permitted.

Open source software is different. Its authors make its source code available to others who would like to view that code, copy it, learn from it, alter it, or share it. LibreOffice and the GNU Image Manipulation Program are examples of open source software. As they do with proprietary software, users must accept the terms of a license when they use open source software—but the legal terms of open source licenses differ dramatically from those of proprietary licenses. Open source software licenses promote collaboration and sharing because they allow other people to make modifications to source code and incorporate those changes into their own projects. Some open source licenses ensure that anyone who alters and then shares a program with others must also share that program's source code without charging a licensing fee for it. In other words, computer programmers can access, view, and modify open source software whenever they like—as long as they let others do the same when they share their work. In fact, they could be violating the terms of some open source licenses if they don't do this.

So as the Open Source Initiative explains, "open source doesn't just mean access to the source code." It means that anyone should be able to modify the source code to suit his or her needs, and that no one should prevent others from doing the same. The Initiative's definition of "open source" contains several other important provisions.

Is open source software only important to computer programmers?

Open source software benefits programmers and non-programmers alike. In fact, because much of the Internet itself is built on many open source technologies—like the Linux operating system and the Apache Web server application—anyone using the Internet benefits from open source software. Every time computer users view webpages, check email, chat with friends, stream music online, or play multiplayer video games, their computers, mobile phones, or gaming consoles connect to a global network of computers that routes and transmits their data to the "local" devices they have in front of them.
The computers that do all this important work are typically located in faraway places that users don't see or can't physically access—which is why some people call these computers "remote computers." More and more, people rely on remote computers when doing things they might otherwise do on their local devices. For example, they use online word processing, email management, and image editing software that they don't install and run on their personal computers. Instead, they simply access these programs on remote computers by using a Web browser or mobile phone application.

Some people call remote computing "cloud computing," because it involves activities (like storing files, sharing photos, or watching videos) that incorporate not only local devices, but also the global network of remote computers that form an "atmosphere" around them. Cloud computing is an increasingly important aspect of everyday life with Internet-connected devices. Some cloud computing applications, like Google Docs, are closed source programs. Others, like Etherpad, are open source programs.
Cloud computing applications run "on top" of additional software that helps them operate smoothly and effectively. The software that runs "underneath" cloud computing applications acts as a platform for those applications. Cloud computing platforms can be open source or closed source. OpenStack is an example of an open source cloud computing platform.
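As a rough sketch of what that looks like in practice (this assumes the openstacksdk Python library and a cloud named "my-cloud" defined in a local clouds.yaml file; details vary between deployments), a program running "on top" of an open source platform like OpenStack talks to it much as it would to any other cloud service:

    # List the virtual servers running on an OpenStack cloud.
    # Assumes the "openstacksdk" library is installed and a cloud named
    # "my-cloud" is configured in the local clouds.yaml file.
    import openstack

    conn = openstack.connect(cloud="my-cloud")

    for server in conn.compute.servers():
        print(server.name, server.status)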

Why do people prefer using open source software?

Many people prefer open source software because they have more control over that kind of software. They can examine the code to make sure it's not doing anything they don't want it to do, and they can change parts of it they don't like. Users who aren't programmers also benefit from open source software, because they can use this software for any purpose they wish—not merely the way someone else thinks they should.

Others like open source software because it helps them become better programmers. Because open source code is publicly accessible, students can learn to make better software by studying what others have written. They can also share their work with others, inviting comment and critique.
Some people prefer open source software because they consider it more secure and stable than proprietary software. Because anyone can view and modify open source software, someone might spot and correct errors or omissions that a program's original authors might have missed. And because so many programmers can work on a piece of open source software without asking for permission from original authors, open source software is generally fixed, updated, and upgraded quickly.
Many users prefer open source software to proprietary software for important, long-term projects. Because the source code for open source software is distributed publicly, users that rely on software for critical tasks can be sure their tools won't disappear or fall into disrepair if their original creators stop working on them.

Doesn't "open source" just mean something is free of charge?

No. This is a common misconception about what "open source" implies. Programmers can charge money for the open source software they create or to which they contribute. But because most open source licenses require them to release their source code when they sell software to others, many open source software programmers find that charging users money for software services and support (rather than for the software itself) is more lucrative. This way, their software remains free of charge and they make money helping others install, use, and troubleshoot it.

What is open source "beyond software"?

At opensource.com, we like to say that we're interested in the ways open source can be applied to the world beyond software. We like to think of open source as not only a way to develop and license computer software, but also an attitude. Approaching all aspects of life "the open source way" means expressing a willingness to share, collaborating with others in ways that are transparent (so that others can watch and join too), embracing failure as a means of improving, and expecting—even encouraging—everyone else to do the same.

It means committing to playing an active role in improving the world, which is possible only when everyone has access to the way that world is designed. The world is full of "source code"—blueprints, recipes, rules—that guide and shape the way we think and act in it. We believe this underlying code (whatever its form) should be open, accessible, and shared—so many people can have a hand in altering it for the better.
Here, we tell stories about what happens when open source values are applied to business, education, government, health, law, and any other area of life. We're a community committed to telling others how the open source way is the best way—because a love of open source is just like anything else: it's better when it's shared.

Where can I learn more about open source?

We recommend visiting the Opensource.com resources page.

Opensource.com

10 amazingly alternative operating systems and what they could mean for the future

This post is about the desktop operating systems that fly under the radar of most people. We are definitely not talking about Windows, Mac OS X or Linux, or even BSD or Solaris. There are plenty of far less mainstream options out there for the OS-curious.
These alternative operating systems are usually developed either by enthusiasts or small companies (or both), and there are more of them than you might expect, even beyond the ones included in this article. We think this is a good selection of the more interesting ones, and we have focused specifically on desktop operating systems.
As you will see, many of them are very different from what you may be used to. We will discuss the potential of this in the conclusion of this article.

Enough introduction, let’s get started! Here is a look at 10 alternative operating systems, starting with a familiar old name…


AmigaOS 4.1

This month (September 2008) AmigaOS 4.1 was released. Although AmigaOS is a veteran in the field (many have fond memories of the original Amiga computer), its current version is a fully modern OS.
AmigaOS only runs on specific PowerPC-based hardware platforms. The company ACube is currently marketing and distributing AmigaOS and is going to bundle the OS with their motherboards.



Source model: Closed source
License: Proprietary
Platform: PowerPC
State: Final
Read a review of AmigaOS 4.1 at Arstechnica.


Haiku

Haiku is an open source project aimed at recreating and continuing the development of the BeOS operating system (which Palm Inc. bought and then discontinued). Haiku was initially known as OpenBeOS but changed its name in 2004.
Haiku is compatible with software written for BeOS.

Source model: Free and open source
License: MIT License
Platform: x86 and PowerPC
State: Pre-Alpha
Read more at the Haiku website.


ReactOS

ReactOS is an operating system designed to be compatible with Microsoft Windows software. The project started in 1998 and today it can run many Windows programs well. The ReactOS kernel has been written from scratch but the OS makes use of Wine to be able to run Windows applications.

Source model: Free and open source
License: Various free software licenses
Platform: x86 (more under development)
State: Alpha
Read more at the ReactOS website.


Syllable Desktop

Syllable is a free and open source operating system that was forked in 2002 from AtheOS, an AmigaOS clone. It’s intended as a lightweight and fast OS suitable for home and small office users.

Source model: Free and open source
License: GNU General Public License
Platform: x86
State: Alpha
Read more at the Syllable website.


SkyOS

SkyOS is a closed source project written by Robert Szeleney and volunteers. It originally started as an experiment in OS design. It’s intended to be an easy-to-use desktop OS for average computer users. Well-known applications such as Firefox have been ported to run on SkyOS.

Source model: Closed source
License: Proprietary
Platform: x86
State: Beta
Read more at the SkyOS website.


MorphOS

MorphOS is a lightweight, media-centric OS built to run on PowerPC processors. It is inspired by AmigaOS and also includes emulation to be able to run Amiga applications.

Source model: Closed source
License: Mixed proprietary and open source
Platform: Pegasos, some Amiga models, EFIKA
Read more at the MorphOS website.


AROS Research Operating System

AROS is a lightweight open source OS designed to be compatible with AmigaOS 3.1 while also improving on it. The project was started in 1995, and today the OS runs on both PowerPC and IBM PC compatible hardware. It also includes an emulator that makes it possible to run old Amiga applications.

Source model: Open source
License: AROS Public License
Platform: x86 and PowerPC
Read more at the AROS website.


MenuetOS

MenuetOS, also known as MeOS, is an operating system written entirely in assembly language, which makes it very small and fast. Even though it includes a graphical desktop, networking and many other features, it still fits on a single 1.44 MB floppy disk (for our younger readers, that was the USB stick of the 80s and early 90s ;) ).



Source model: Open source (32-bit version), freeware (64-bit version)
License: Menuet License
Platform: x86
State: Beta
Read more at the MenuetOS website.


DexOS

DexOS is an open source operating system designed to work like the minimalist systems found on gaming consoles, but for PCs. Its user interface is inspired by video game consoles, the system itself is very small (supposedly it also fits on a floppy disk, like MenuetOS), and it can be booted from several different devices. Its creators have tried to make it as fast as possible.

Source model: Free and open source
Platform: x86
Read more at the DexOS website.


Visopsys

Visopsys is a one-man hobby project by programmer Andy McLaughlin. The development began in 1997 and the OS is both open source and free. Visopsys stands for VISual Operating SYStem.

Source model: Open source
License: GPL
Platform: x86
State: Final
Read more at the Visopsys website.


The OS future

Even if none of these operating systems ever were to “make it” and become mainstream (and admittedly, some of them simply are not intended to be mainstream), the passion behind them is real, and many have the potential to introduce new and fresh ideas.
All this independent development can act as a kind of think tank, if you choose to look at it that way. It’s quite possible that concepts introduced by a niche OS will later be adopted by a larger player on the OS market.

There is a lot happening today: the rise of virtualization and the “always online” nature of modern computers open up incredibly interesting possibilities. For example, what we have read about Microsoft’s internal research OS Midori (rumored to eventually replace Windows) sounds highly promising.


Wherever the future operating systems may come from, be it from the already established players or some kind of newcomer, we are looking forward to seeing what the future has in store for us. We suspect that there is a significant “jump” in the evolution coming up just around the corner.
Who knows, a couple of years from now maybe all the computers here in the Pingdom office will be running the UltraMagicalSuperVirtualOS version 1.2?
Royal.pingdom.com

Monday 25 August 2014

Drones are Fun, Until One Hits You In the Face


Mini drones are not yet appearing in our skies on a daily basis, but they certainly are a rapidly growing trend. People can and do get hurt, so we really need to help amateur pilots learn how to fly their new toys safely.
There are all kinds of exciting developments happening in this field, and hobbyists can now pick up a device for relatively little money. But as more and more of these devices come onto the market, more grisly images are popping up online showing what happens when people lose control.

My team and I have been experimenting with drones for some months, flying them over hard-to-reach heritage sites but, as our experience of deploying these platforms has increased, we have become increasingly concerned about how they can be used safely.
Many of the promises about what we will use drones for in the near future are just flights of fancy, especially given the limited payload capabilities of most commercial off-the-shelf products, but the technology is certainly evolving fast.

Out of control?
The main limiting factor for a drone is often the untrained pilot at its helm. There have certainly been incidents in the recent past which demonstrate the consequences of human error, system failure, flying in inappropriate weather conditions and sheer incompetence.
Earlier this year, a resident of Barrow-in-Furness was prosecuted for flying a radio-controlled aircraft in restricted airspace over BAE Systems’ nuclear submarine facility. A similar incident occurred in May when a man was questioned by the FBI for crashing a camera-equipped sUAV (small unmanned aerial vehicle) close to the Bridgeport Harbour Electricity Generating Station in Connecticut. No one was injured in either case but both represented pretty serious incidents in terms of flying within restricted airspace.

There have been numerous reports of injuries, even fatalities, caused by loss of control of an sUAV, and some pretty harrowing images can easily be found online. Reported cases include a bridegroom who was struck in the face by a quadcopter flown by an adventurous wedding photographer, and the crashing of a hexacopter into the grandstand at Virginia Motorsports Park, injuring five. The second half of 2013 was particularly bad, with fatalities in Texas, Korea and Brazil and the horrendous case of a 19-year-old who died instantly in a Brooklyn park when the blades of his radio-controlled aircraft struck his head and neck.


Of course, there are rules and guidelines about drones. But when the Civil Aviation Authority tries to address the human factor in its guidelines, its efforts fall rather flat. Its warnings are far too generic and rigid to cope with a rapidly changing technological scene. It discusses the dangers of remote data feedback and stresses that it’s important for pilots to remain situationally aware, but the style of wording and absence of illustrated examples do little to emphasise the very real dangers of flying when non-expert pilots are in charge of a drone.
Having now experimented with a number of drones at historical sites, often in remote moorland or coastal regions, I find myself in strong agreement with those who call for regulations to be strengthened. Indeed, we have recently produced our own Standard Operating Procedures in an attempt to fill in the gaps evident in existing guidance.


Over the past 12 months alone, our hexacopter has evolved from a 2kg to a 4kg payload capacity. Our students have, as a result, been able to experiment with different tools for the drones. One has been a camera that feeds video back to a head-mounted display worn by the pilot on the ground. This certainly generates impressive video, but even our experienced pilot has been “drawn in” by the stunning picture quality, only to be alerted – just in time – to the appearance of the sUAV propeller blades in his field of view as the vehicle becomes progressively unstable.


Flying school

It is possible to take training courses for flying drones, but these tend to be aimed at those wanting to use sUAVs for professional or commercial work, as opposed to the hobbyist or academic researcher. It is clear that, as the stories of injuries, fatalities, property damage, invasion of privacy, trespass and airspace incursion multiply, the situation has to change, even though any change will, without doubt, be very unpopular with many hobbyists and retailers.


Some developers have started working on firmware modifications that would help pilots stop their drones from inadvertently flying into restricted airspace. This is a promising development, even if it might not be met with great enthusiasm from users, but much more needs to be done. Every drone sold needs to be registered and marked (perhaps even chipped) in some way so that it can be traced directly back to its owner if something goes wrong. There is no doubt that this will be difficult to enforce, especially as 3D printing technologies are increasingly being used to manufacture replacement components.
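As a toy illustration of the geofencing idea mentioned above (this is not any vendor's actual firmware, and the coordinates, radius and function names are invented for this sketch), the core check can be as simple as measuring the distance from the drone's current GPS fix to the centre of each restricted zone and refusing to proceed if it is too close:

    # Toy geofence check: refuse to fly inside a restricted zone.
    # The zone coordinates and radius are made up for illustration.
    import math

    RESTRICTED_ZONES = [
        # (latitude, longitude, radius in metres)
        (54.1108, -3.2261, 5000.0),   # hypothetical no-fly zone
    ]

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two points, in metres."""
        r = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def flight_allowed(lat, lon):
        """Return False if the position falls inside any restricted zone."""
        return all(distance_m(lat, lon, zlat, zlon) > radius
                   for zlat, zlon, radius in RESTRICTED_ZONES)

    print(flight_allowed(54.2000, -3.2000))  # well outside the zone, so True

A real implementation would of course need an up-to-date database of restricted airspace and a fail-safe response, such as hovering or returning home, rather than a simple yes/no answer.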
Drone pilots also need to submit to some form of basic competence assessment, such as through an app, and should then be granted a licence. Such an assessment could take the form of a test similar to those used to teach driving students about decision-making and observational skills.

sUAV simulator packages already exist, as do apps, but most fall short of teaching and evaluating the essential skills and awareness necessary for safe sUAV operations. We should focus on human-centred design issues from the start so that simulators can test reaction times, decision-making for flights in urban or sensitive areas, pilot distraction effects and other potential problems a pilot might face.
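As a very rough sketch of the kind of measurement such a tool might make (the prompt text and the 0.5 second pass threshold are invented for this example), even a few lines of code can time how quickly a would-be pilot reacts to an unexpected event:

    # Minimal reaction-time check for a would-be pilot.
    # The 0.5 second threshold is arbitrary and purely illustrative.
    import random
    import time

    def reaction_time_s():
        """Wait a random interval, show a prompt, and time the response."""
        time.sleep(random.uniform(1.0, 3.0))
        start = time.time()
        input("Obstacle ahead! Press Enter to react: ")
        return time.time() - start

    t = reaction_time_s()
    print("Reaction time: %.2f seconds" % t)
    print("Pass" if t < 0.5 else "Needs more practice")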
Of course, such tests would not replace professional courses but they would help to ensure that the growing number of pilots can fly their devices safely. Some might complain that these measures are over the top and totally unnecessary for something that is, after all, just a smaller version of a radio-controlled helicopter. But just type “quadcopter injuries” into Google Images and make up your own mind.
Livescience.com