Wednesday, March 30, 2016

Android Intent Is Like Asynchronous API Call

What is an Intent?

An Intent is basically a message that is passed between components (such as Activities, Services, Broadcast Receivers, and Content Providers). It is thus almost equivalent to parameters passed to API calls. The fundamental differences between API calls and the intent-based way of invoking components are:
  • API calls are synchronous while intent-based invocations are asynchronous.
  • API calls are compile time binding while intent-based calls are run-time binding.
Of course, Intents can be made to work exactly like API calls by using what are called explicit intents, which will be explained later. But more often than not, implicit intents are the way to go and that is what is explained here.
One component that wants to invoke another has only to express its intent to do a job. Any other component that has claimed, through intent-filters, that it can do such a job is invoked by the Android platform to accomplish it. This means the two components need not be aware of each other's existence and can still work together to give the desired result for the end user.
This invisible connection between components is achieved through the combination of intents, intent-filters and the android platform.
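This invisible connection can be sketched as a toy dispatcher in plain Java. This is not the real Android API; the class and method names (`IntentDispatchSketch`, `registerFilter`, `send`) are invented purely to illustrate how an action string plus a filter registry yields run-time binding between components that never name each other.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model: components register "intent-filters" (action names) with the
// platform; a caller expresses only the action it wants performed.
public class IntentDispatchSketch {

    // action name -> handler, analogous to the platform's filter matching
    private final Map<String, Function<Map<String, String>, String>> filters = new HashMap<>();

    // A component declares the action it can handle (its "intent-filter").
    public void registerFilter(String action, Function<Map<String, String>, String> handler) {
        filters.put(action, handler);
    }

    // A caller sends an "intent": just an action plus extras; it never
    // names the receiving component.
    public String send(String action, Map<String, String> extras) {
        Function<Map<String, String>, String> handler = filters.get(action);
        if (handler == null) {
            throw new IllegalStateException("No component handles " + action);
        }
        return handler.apply(extras);
    }

    public static void main(String[] args) {
        IntentDispatchSketch platform = new IntentDispatchSketch();
        platform.registerFilter("ACTION_VIEW",
                extras -> "viewing " + extras.get("uri"));

        Map<String, String> extras = new HashMap<>();
        extras.put("uri", "content://contacts/1");
        System.out.println(platform.send("ACTION_VIEW", extras));
    }
}
```

The caller's component and the handler's component are bound only at the moment `send` runs, which is the late run-time binding described above.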
This leads to huge possibilities like:
  • Mix and match or rather plug and play of components at runtime.
  • Replacing the inbuilt android applications with custom developed applications.
  • Component level reuse within and across applications.
  • Service orientation to the most granular level, if I may say.
Here is an additional, more formal description of an intent:
An intent is an abstract description of an operation to be performed. It can be used with startActivity to launch an Activity, broadcastIntent to send it to any interested BroadcastReceiver components, and startService(Intent) or bindService(Intent, ServiceConnection, int) to communicate with a background Service.
An Intent provides a facility for performing late runtime binding between the code in different applications. Its most significant use is in the launching of activities, where it can be thought of as the glue between activities. It is basically a passive data structure holding an abstract description of an action to be performed. The primary pieces of information in an intent are:
  • action The general action to be performed, such as ACTION_VIEW, ACTION_EDIT, ACTION_MAIN, etc.
  • data The data to operate on, such as a person record in the contacts database, expressed as a Uri.
The Android platform is built around this data structure, and reading the following documentation on it is very helpful:

Other answers: 

Intents are widely used in Android to switch from one activity to another, and it is good practice to use them. Using intents we can pass values from one activity to another, so they also serve as a value-passing mechanism. Their syntax is also very easy, so why think about threads?

Intents are asynchronous messages which allow application components to request functionality from other Android components. Intents allow you to interact with components from your own and other applications. For example, an activity can start an external activity for taking a picture.

Intents are objects of the android.content.Intent type. Your code can send them to the Android system, specifying which components you are targeting. For example, via the startActivity() method you can specify that the intent should be used to start an activity. An intent can carry data via a Bundle. This data can be used by the receiving component.

To start an activity use the method startActivity(intent). This method is defined on the Context object which Activity extends.

Intel x86 Computer Processor Architecture

... Intel does everything well.  Intel x86 made its mass-market debut in 1981 in the IBM PC, in the form of the Intel 8088. That's over 30 years ago. Since then, Intel has made significant enhancements to the instruction set architecture to keep it relevant in a market where 18 months means obsolescence. They moved from 16-bit to 32-bit and now 64-bit. They have added hardware accelerators on the chip, and they have even built RISC-style execution into the architecture to speed up some of the CISC instructions that plagued performance. Intel x86 runs in everything from supercomputers to servers to desktops and laptops. Intel does everything well. Some would say “good enough”. [x86 survived both the iAPX 432 and the Itanium, each designed to replace it]

vs. IBM Power: mission-critical applications.  The current IBM Power architecture design began in 1997, and the processor was announced in 2001 as the Power4. It was the first multicore processor in the industry. Comparing the x86 and Power processors on a micro-benchmark level will show little raw performance advantage for either. Comparing the two using enterprise workloads will demonstrate a significant advantage for Power in data workloads such as databases, data warehouses, transaction processing, data encryption/compression, and certainly in high-performance computing, which most in business think of as analytics.

IBM fielded the Power4 processor back in October 2001; it was the first RISC/Unix processor to have two cores and to break the 1 GHz clock speed barrier.

Tuesday, March 29, 2016

List of Worst Failed Computer Projects

Failed Computer Projects

Ada computer language

NeXT Computer (1988): Based on Motorola's new 25MHz 68030 CPU and including 8MB-64MB of RAM, a 330MB hard drive and a 1120x832 grayscale display, Steve Jobs' NeXT Computer cost $10,000 a pop. It was inaccessible to most and didn't sell very well. Despite its limited commercial success, NeXT played a pivotal role in history.

The 16 Worst Failed Computers of All Time -...
14. Commodore Plus/4 (1984): Commodore released like 2,000 computers in about 5 years' time. That's baffling. The Plus/4 was a home computer with a built-in ...

12. IBM PS/2 (1987): How ya’ gonna’ do it? PS/2 It! Or not. The Personal System/2 was IBM’s failed attempt to regain control of the clone market via a closed, proprietary architecture.

The HP 3000 (as first shipped) was a failure: the design did not take into account the cost of moving data from memory into stack registers.

HP 300 (Wikipedia)  The HP 300 "Amigo" was a computer produced by Hewlett Packard (HP) in the late 1970s, based loosely on the stack-based HP 3000 but with virtual memory for both code and data. It introduced built-in networking, automatic spelling correction, multiple windows (on a character-based screen), and labels adjacent to vertically stacked user function keys, now used on ATMs and gas pumps. The HP 300 featured the HP-IB (later IEEE-488) interface as the I/O bus, an 8" floppy disk, and a built-in fixed 12 MB hard drive of the kind later common on PCs. The HP 300 was cut short of being a commercial success despite the huge engineering effort, which included HP-developed and -manufactured silicon-on-sapphire (SOS) processor and I/O chips. HP Computer Systems Division General Manager (GM) Doug Spreng decided the file system differences between the division's money-making HP 3000 line and the burgeoning HP 300 would keep the HP 300 from being successful, and killed the product. HP built two semi-truck loads of units before shutting down the HP 300 production line, to meet customer contractual agreements.

iAPX 432 (final project report): The iAPX 432 was a flop and was discontinued only four years after its release. Speed was the main reason the processor failed, although many programmers also did not see Ada as the way of the future and therefore ignored the chip. The 432 was so slow because it verified many memory accesses (causing extra memory reads), its instructions were not aligned and took a while to decode, it did not have a large enough cache, it did not have enough registers, and it was split across extra chips which had to communicate. The success of the 80286 sealed its fate; as mentioned above, the 80286 was four times faster than the 432. The iAPX 432 taught Intel a lot about what could and could not be done, and it was impressive that so complex a system could be created with the available technology, but it was too complicated for practical use.

Tandem rejected an iAPX 432 proposal, asking why it was 4x slower than the 8086; roughly 50% of the compiler-generated instructions were unnecessary.

Dvorak: Once released, the chip proved to be a woofing dog. The designers gave up on it as a product and moved forward with some of the ideas used to design the chip. It is believed that it was given up on after 1984, although supplies of the chipset may still have been available as late as 1993. The ideas behind the chip continued and slowly evolved into what is today's Intel i960 embedded processor. Everything changed once the 432 hit the market and was determined to be a dog. The 432 was simply too ambitious an undertaking. (Posted on July 18, 2008)
Our many failures go largely unstudied — and the rich veins of wisdom that these failures generate live on only in oral tradition passed down by the perps (occasionally) and the victims (more often).

A counterexample to this — and one of my favorite systems papers of all time — is Robert Colwell‘s brilliant Performance Effects of Architectural Complexity in the Intel 432. This paper, which dissects the abysmal performance of Intel’s infamous 432, practically drips with wisdom, and is just as relevant today as it was when the paper was originally published nearly twenty years ago.

For those who have never heard of the Intel 432, it was a microprocessor conceived of in the mid-1970s to be the dawn of a new era in computing, incorporating many of the latest notions of the day. But despite its lofty ambitions, the 432 was an unmitigated disaster both from an engineering perspective (the performance was absolutely atrocious) and from a commercial perspective (it did not sell — a fact presumably not unrelated to its terrible performance). To add insult to injury, the 432 became a sort of punching bag for researchers, becoming, as Colwell described, “the favorite target for whatever point a researcher wanted to make.”

But as Colwell et al. reveal, the truth behind the 432 is a little more complicated than trendy ideas gone awry; the microprocessor suffered not only from untested ideas, but also from terrible execution. For example, one of the core ideas of the 432 is that it was a capability-based system, implemented with a rich hardware-based object model. This model had many ramifications for the hardware, but it also introduced a dangerous dependency on software: the hardware was implicitly dependent on system software (namely, the compiler) for efficient management of protected object contexts (“environments” in 432 parlance). As it happened, the needed compiler work was not done, and the Ada compiler as delivered was pessimal: every function was implemented in its own environment, meaning that every function was in its own context, and that every function call was therefore a context switch! As Colwell explains, this software failing was the greatest single inhibitor to performance, costing some 25-35 percent on the benchmarks that he examined.

If the story ended there, the tale of the 432 would be plenty instructive — but the story takes another series of interesting twists: because the object model consumed a bunch of chip real estate (and presumably a proportional amount of brain power and department budget), other (more traditional) microprocessor features were either pruned or eliminated. The mortally wounded features included a data cache (!), an instruction cache (!!) and registers (!!!). Yes, you read correctly: this machine had no data cache, no instruction cache and no registers — it was exclusively memory-memory. And if that weren’t enough to assure awful performance: despite having 200 instructions (and about a zillion addressing modes), the 432 had no notion of immediate values other than 0 or 1. Stunningly, Intel designers believed that 0 and 1 “would cover nearly all the need for constants”, a conclusion that Colwell (generously) describes as “almost certainly in error.” The upshot of these decisions is that you have more code (because you have no immediates) accessing more memory (because you have no registers) that is dog-slow (because you have no data cache) that itself is not cached (because you have no instruction cache). Yee haw!

Monday, March 28, 2016

Enable Android Tablet or Phone for Debugging

Enable Developer Options

Settings - About Tablet - Software Version
Tap Build number repeatedly (typically seven times) until it says you are a developer

Settings - System - Developer Options

Warning - OK

USB debugging [x]

Find USB driver for the device, and install it

USB composite device
LG G Pad F 7.0
ADB interface

Tablet - allow usb debugging YES

Scitools Understand

Source Code Analyzers as a Development Tool
May 24, 2013 | Richard Brett

It is difficult to write consistent and high quality code when using libraries/SDKs from multiple sources and when development is distributed between several teams and multiple time zones.  Many challenges exist for both new and experienced developers, including lack of documentation, insufficient unit test coverage, and nuances to each platform/SDK that make things different. It becomes necessary for developers of one platform to understand complicated legacy code of an unfamiliar platform. To make things more complex, it may be written in a language they do not understand well.  It is estimated that up to 60-80% of a programmer's time is spent maintaining a system, and 50% of that maintenance effort is spent understanding the program.

It is helpful for developers to have tools that can analyze different codebases quickly. A rather comprehensive list of source code analyzer tools for each platform is listed here. Since an in-depth comparison of the multitude of analyzer tools is beyond the scope of this article, all figures and analysis were done using Understand by SciTools, a typical, albeit premium, source code analysis tool.

Understand by SciTools

Understand by SciTools has the ability to scan Ada, Cobol, C/C++, C#, Fortran, Objective-C, Java, Jovial, Pascal, PL/M, Python and others. Like many multi-language analyzer programs it is not free, however, the benefits of such a program are enormous. (For the purposes of this demonstration, a deprecated and unused codebase was analyzed.)....

(continues at link)

Thursday, March 24, 2016

Mocking in Test Driven Development

Some definitions:

Mocking the Embedded World:
Test-Driven Development, Continuous Integration, and Design Patterns
Michael Karlesky, Greg Williams, William Bereza, Matt Fletcher
Atomic Object,

A Note on "Mocking" and This Article’s Title
Mocking in software development is a specific practice that complements unit testing (in particular, interaction-based testing). The majority of a system's code consists of code making calls to other parts of the codebase. A mock is a specialized substitution for any part of the system with which the code under test interacts.

The mock not only mimics the function call interface of the system code outside the code under test; it also provides the means to capture the parameters of function calls made upon it, record the order of calls made, and provide any function return value a programmer requires for testing scenarios. With mocks we can thoroughly test all of the logic within a function and verify that this code makes calls to the rest of the system as expected. Mocking is covered in more depth later.
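What a mock provides can be shown with a hand-rolled sketch in plain Java (no mocking framework; the `TemperatureSensor` interface and all names are invented for illustration): the double records every call and its parameters in order, and returns whatever canned value the test configured.

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled illustration of what a mock does: it stands in for a
// collaborator, records calls and parameters, and returns canned values.
public class HandRolledMockSketch {

    interface TemperatureSensor {
        int readCelsius(String probeId);
    }

    static class MockSensor implements TemperatureSensor {
        final List<String> calls = new ArrayList<>(); // call order + parameters
        int cannedReading;                            // value the test chooses

        public int readCelsius(String probeId) {
            calls.add("readCelsius(" + probeId + ")");
            return cannedReading;
        }
    }

    // Code under test: the logic we want to verify in isolation.
    static boolean overheated(TemperatureSensor sensor, String probeId) {
        return sensor.readCelsius(probeId) > 90;
    }

    public static void main(String[] args) {
        MockSensor mock = new MockSensor();
        mock.cannedReading = 95;                      // stub the return value
        System.out.println(overheated(mock, "cpu"));  // exercise the logic
        System.out.println(mock.calls);               // verify the interaction
    }
}
```

Frameworks like Mockito automate exactly this: generating the recording double and providing a fluent syntax for the canned values and the verification.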

Automated unit testing is far more prevalent in high-level software systems than in embedded systems, though certainly even there it is not widespread. To our knowledge, automatically generating and unit testing with mocks in embedded software (particularly in small systems and those using C) such as we have done is a new development in the embedded space. This article's title is a play on the uniqueness of the mocking concept to embedded software and a reaction to those in the industry who say practices such as TDD are impossible to implement or have no value in embedded software development.

Wikipedia Test-driven development 

Fakes, mocks and integration tests
Unit tests are so named because they each test one unit of code. A complex module may have a thousand unit tests and a simple module may have only ten. The tests used for TDD should never cross process boundaries in a program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage developers from running the whole suite. Introducing dependencies on external modules or data also turns unit tests into integration tests. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause of the failure.
When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code.[33] Two steps are necessary:
  1. Whenever external access is needed in the final design, an interface should be defined that describes the access available. See the dependency inversion principle for a discussion of the benefits of doing this regardless of TDD.
  2. The interface should be implemented in two ways, one of which really accesses the external process, and the other of which is a fake or mock. Fake objects need do little more than add a message such as “Person object saved” to a trace log, against which a test assertion can be run to verify correct behaviour. Mock objects differ in that they themselves contain test assertions that can make the test fail, for example, if the person's name and other data are not as expected.
Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so that error-handling routines can be developed and reliably tested. In a fault mode, a method may return an invalid, incomplete or null response, or may throw an exception. Fake services other than data stores may also be useful in TDD: A fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always return 1. Fake or mock implementations are examples of dependency injection.
A Test Double is a test-specific capability that substitutes for a system capability, typically a class or function, that the UUT depends on. There are two times at which test doubles can be introduced into a system: link and execution. Link-time substitution is when the test double is compiled into the load module, which is executed to validate testing. This approach is typically used when running in an environment other than the target environment that requires doubles for the hardware-level code for compilation. The alternative to linker substitution is run-time substitution, in which the real functionality is replaced during the execution of a test case. This substitution is typically done through the reassignment of known function pointers or object replacement.
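Run-time substitution can be sketched in plain Java by modeling the "function pointer" as a mutable Supplier field that a test repoints at a double without relinking (all names here are invented for illustration; embedded C code would reassign an actual function pointer instead):

```java
import java.util.function.Supplier;

// Sketch of run-time substitution: a mutable reference to the capability
// lets a test swap in a double during execution, then restore the original.
public class RuntimeSubstitutionSketch {

    // Real capability: in production this might read a hardware register.
    static Supplier<Integer> readSensor = () -> 42;

    // Code under test, which calls through the substitutable reference.
    static String classify() {
        return readSensor.get() > 100 ? "hot" : "ok";
    }

    public static void main(String[] args) {
        System.out.println(classify());   // uses the real implementation: ok

        Supplier<Integer> saved = readSensor;
        readSensor = () -> 150;           // test double swapped in at run time
        System.out.println(classify());   // now exercises the "hot" branch
        readSensor = saved;               // restore after the test
    }
}
```

The save/restore around the swap is what keeps one test's substitution from leaking into the next.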
Test doubles are of a number of different types and varying complexities:
  • Dummy – A dummy is the simplest form of a test double. It facilitates linker time substitution by providing a default return value where required.
  • Stub – A stub adds simplistic logic to a dummy, providing different outputs.
  • Spy – A spy captures and makes available parameter and state information, publishing accessors to test code for private information allowing for more advanced state validation.
  • Mock – A mock is specified by an individual test case to validate test-specific behavior, checking parameter values and call sequencing.
  • Simulator – A simulator is a comprehensive component providing a higher-fidelity approximation of the target capability (the thing being doubled). A simulator typically requires significant additional development effort.[8]
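The first three kinds of double above can each be hand-written in a few lines. This plain-Java sketch (the `Logger` interface and names are invented for illustration) contrasts a dummy, a stub, and a spy against one interface:

```java
import java.util.ArrayList;
import java.util.List;

// One interface, three hand-written doubles of increasing capability.
public class TestDoubleSketch {

    interface Logger {
        void log(String msg);
        int entryCount();
    }

    // Dummy: satisfies the type system, returns a default, does nothing.
    static class DummyLogger implements Logger {
        public void log(String msg) { }
        public int entryCount() { return 0; }
    }

    // Stub: adds simplistic canned logic, providing a chosen output.
    static class StubLogger implements Logger {
        public void log(String msg) { }
        public int entryCount() { return 5; }   // fixed answer for the test
    }

    // Spy: captures parameters and state for later inspection by the test.
    static class SpyLogger implements Logger {
        final List<String> captured = new ArrayList<>();
        public void log(String msg) { captured.add(msg); }
        public int entryCount() { return captured.size(); }
    }

    public static void main(String[] args) {
        SpyLogger spy = new SpyLogger();
        spy.log("boot");
        spy.log("ready");
        System.out.println(new StubLogger().entryCount()); // 5
        System.out.println(spy.captured);                  // [boot, ready]
    }
}
```

A mock would add the fourth capability from the list: assertions on parameter values and call sequencing, as the Mockito examples later in this post show.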
A corollary of such dependency injection is that the actual database or other external-access code is never tested by the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven code with the "real" implementations of the interfaces discussed above. These are integration tests and are quite separate from the TDD unit tests. There are fewer of them, and they must be run less often than the unit tests. They can nonetheless be implemented using the same testing framework, such as xUnit.
Integration tests that alter any persistent store or database should always be designed carefully with consideration of the initial and final state of the files or database, even if any test fails. This is often achieved using some combination of the following techniques:
  • The TearDown method, which is integral to many test frameworks.
  • try...catch...finally exception handling structures where available.
  • Database transactions where a transaction atomically includes perhaps a write, a read and a matching delete operation.
  • Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run. This may be automated using a framework such as Ant or NAnt or a continuous integration system such as CruiseControl.
  • Initialising the database to a clean state before tests, rather than cleaning up after them. This may be relevant where cleaning up may make it difficult to diagnose test failures by deleting the final state of the database before detailed diagnosis can be performed.
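The snapshot and try...finally techniques above can be sketched in plain Java, with a Map standing in for the persistent store (names invented; a real integration test would snapshot the actual database or files):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of snapshot + try...finally cleanup: whatever the test body does,
// the persistent state is restored before the test returns.
public class CleanupSketch {

    // Stand-in for a persistent store shared across tests.
    static Map<String, String> store = new HashMap<>();

    static void runTest() {
        Map<String, String> snapshot = new HashMap<>(store); // take "snapshot"
        try {
            store.put("user:1", "temp test row");            // dirty the store
            if (!store.containsKey("user:1")) {
                throw new AssertionError("row not written");
            }
        } finally {
            store = snapshot;   // roll back even if the test body failed
        }
    }

    public static void main(String[] args) {
        store.put("config", "prod");
        runTest();
        System.out.println(store);   // test data rolled back; only config left
    }
}
```

The same shape is what a framework's TearDown method gives you automatically: the restore runs regardless of how the test body exits.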

Mockito - Official Site
A landing page for information about Mockito framework, a mocking framework for unit tests written in Java.

Mockito - Wikipedia, the free encyclopedia
Mockito is an open source testing framework for Java released under the MIT License.[1][2] The framework allows the creation of test double objects (mock objects) in automated unit tests for the purpose of Test-driven Development (TDD) or Behavior Driven Development (BDD).
In software development there is the need to ensure that objects perform the behaviors expected of them. One approach is to create a test automation framework that actually exercises each of those behaviors and verifies that it performs as expected, even after it is changed. However, creating an entire testing framework is often an onerous task that requires as much effort as writing the original objects that were supposed to be tested. For that reason, developers have created mock testing frameworks. These effectively fake some external dependencies so that the object being tested has a consistent interaction with its outside dependencies. Mockito intends to streamline the delivery of these external dependencies that are not subjects of the test. A study performed in 2013 on 10,000 GitHub projects found that Mockito is the 9th most popular Java library.[3]

Distinguishing features

Mockito distinguishes itself from other mocking frameworks by allowing developers to verify the behavior of the system under test (SUT) without establishing expectations beforehand.[4] One of the criticisms of mock objects is that there is a tight coupling of the test code to the system under test.[5] Since Mockito attempts to eliminate the expect-run-verify pattern[6] by removing the specification of expectations, the coupling is reduced or minimized. The result of this distinguishing feature is simpler test code that should be easier to read and modify. Mockito also provides some annotations useful for reducing boilerplate code.[7]


Szczepan Faber started the Mockito project after finding existing mock object frameworks too complex and difficult to work with. He began by expanding on the syntax and functionality of EasyMock, but eventually rewrote most of the framework.[8] Faber's goal was to create a framework that was easier to work with; Mockito was first used on the Guardian project in London in early 2008.[9]


Mockito has a growing user-base[10][11] as well as finding use in other open source projects.[12]


Consider this decoupled Hello world program; we may unit test some of its parts, using mock objects for other parts.
package org.examples;

import java.io.IOException;

public class HelloApplication {

   public static interface Greeter {
      String getGreeting(String subject);
      String getIntroduction(String actor);
   }

   public static class HelloGreeter implements Greeter {
      private String hello;
      private String segmenter;

      public HelloGreeter(String hello, String segmenter) {
         this.hello = hello;
         this.segmenter = segmenter;
      }

      public String getGreeting(String subject) {
         return hello + " " + subject;
      }

      public String getIntroduction(String actor) {
         return actor + segmenter;
      }
   }

   public static interface HelloActable {
      void sayHello(String actor, String subject) throws IOException;
   }

   public static class HelloAction implements HelloActable {
      private Greeter helloGreeter;
      private Appendable helloWriter;

      public HelloAction(Greeter helloGreeter, Appendable helloWriter) {
         this.helloGreeter = helloGreeter;
         this.helloWriter = helloWriter;
      }

      public void sayHello(String actor, String subject) throws IOException {
         helloWriter.append(helloGreeter.getIntroduction(actor));
         helloWriter.append(helloGreeter.getGreeting(subject));
      }
   }

   public static void main(String... args) throws IOException {
      new HelloAction(new HelloGreeter("hello", ": "), System.out).sayHello("application", "world");
   }
}

The result of HelloApplication launching will be the following:
application: hello world
A unit test for the HelloActable component may look like this:
package org.examples;

import static org.mockito.Matchers.any;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;

import org.examples.HelloApplication.HelloActable;
import org.examples.HelloApplication.HelloAction;
import org.examples.HelloApplication.Greeter;

public class HelloActionUnitTest {

   Greeter helloGreeterMock;
   Appendable helloWriterMock;
   HelloActable helloAction;

   @Before
   public void setUp() {
      helloGreeterMock = mock(Greeter.class);
      helloWriterMock = mock(Appendable.class);
      helloAction = new HelloAction(helloGreeterMock, helloWriterMock);
   }

   @Test
   public void testSayHello() throws Exception {
      when(helloGreeterMock.getIntroduction(eq("unitTest"))).thenReturn("unitTest : ");
      when(helloGreeterMock.getGreeting(eq("world"))).thenReturn("hi world");

      helloAction.sayHello("unitTest", "world");

      verify(helloWriterMock, times(2)).append(any(String.class));
      verify(helloWriterMock, times(1)).append(eq("unitTest : "));
      verify(helloWriterMock, times(1)).append(eq("hi world"));
   }
}
It uses mock objects for the Greeter and Appendable interfaces, and implicitly assumes the following use case:
unitTest : hi world
Integration test code for testing HelloActable wired together with Greeter may look like the following:
package org.examples;

import static org.mockito.Matchers.any;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;

import org.examples.HelloApplication.HelloActable;
import org.examples.HelloApplication.HelloAction;
import org.examples.HelloApplication.Greeter;
import org.examples.HelloApplication.HelloGreeter;

public class HelloActionIntegrationTest {

   HelloActable helloAction;
   Greeter helloGreeter;
   Appendable helloWriterMock;

   @Before
   public void setUp() {
      helloGreeter = new HelloGreeter("welcome", " says ");
      helloWriterMock = mock(Appendable.class);
      helloAction = new HelloAction(helloGreeter, helloWriterMock);
   }

   @Test
   public void testSayHello() throws Exception {
      helloAction.sayHello("integrationTest", "universe");

      verify(helloWriterMock, times(2)).append(any(String.class));
      verify(helloWriterMock, times(1)).append(eq("integrationTest says "));
      verify(helloWriterMock, times(1)).append(eq("welcome universe"));
   }
}
It uses a mock object only in place of the Appendable interface, uses the real implementations for the other (HelloActable and Greeter) interfaces, and implicitly assumes the following use case:
integrationTest says welcome universe
As can be seen from the import statements of the HelloActionUnitTest and HelloActionIntegrationTest classes, it is necessary to put the Mockito and JUnit jars on your classpath to be able to compile and run the test classes.

References

  1. "Mockito in six easy examples". 2009. Retrieved 2012-10-05.
  2. "What's the best mock framework for Java?". Retrieved 2010-12-29.
  3. Weiss, Tal (2013-11-26). "GitHub's 10,000 most Popular Java Projects – Here are The Top Libraries They Use". Retrieved 2014-03-11.
  4. "Features and Motivations". Retrieved 2010-12-29.
  5. Fowler, Martin (2007). "Mocks Aren't Stubs". Retrieved 2010-12-29.
  6. Faber, Szczepan. "Death Wish". Retrieved 2010-12-29.
  7. Kaczanowski, Tomek. "Mockito - Open Source Java Mocking Framework". Retrieved 2013-09-17.
  8. Faber, Szczepan. "Mockito". Retrieved 2010-12-29.
  9. "Mockito Home Page". Retrieved 2010-12-29.
  10. ,easymock,jmock
  11. "Mockito User Base". Retrieved 2010-12-29.
  12. "Mockito in Use". Retrieved 2010-12-29.
External links
Official website
mockito on GitHub
Mockito javadoc
Mockito in six easy examples
Java Mocking Frameworks

Mockito in six easy examples - Gojko Adzic
Mockito is a fantastic mock library for Java. I’m fascinated by how easy it is to use, compared to other things out there both in the Java and .NET world.

Unit tests with Mockito - Tutorial - vogella
This tutorial explains testing with the Mockito framework from within the Eclipse IDE.