There is no doubt that the emergence of new technologies, coupled with developments in mobile platforms, has led to an increased number of smartphones in the market. This has led to increased innovation from application developers, despite the difficulty of choosing the best software platform (Endre, 2009). Google Android, Nokia Symbian OS, Samsung Bada, Apple iOS, BlackBerry OS, and Microsoft Mobile OS are some of the leading mobile operating systems available to developers. However, Apple iOS and Google Android are the two leading, and competing, mobile operating systems in the market. Which of the two platforms is superior remains a subject of debate among developers. There is no doubt that each has its own advantages and disadvantages when compared to the other.
Advantages of Android OS over iOS
First, the Android OS uses the Java programming language, and owing to Java's popularity, many software developers find it the preferable language. When the Android OS is compared to Apple iOS, which uses the Objective-C language, Android gains an advantage because far more programmers are already proficient in Java than in Objective-C. Second, the Android OS is open source software that allows applications to be developed using third-party tools. Apple iOS does not allow third-party tools to be used in the development of applications because it has restrictive developer guidelines. Such guidelines ensure that developers are limited to the fixed set of tools specified by Apple, denying them the ability to experiment with additional functionality in their applications. Such restrictions stifle the creativity of application developers.
Third, the Android OS is versatile in enabling multitasking. Users can perform several activities simultaneously, such as listening to the radio while writing text messages. In contrast, Apple iOS traditionally allowed only one application to run at a time. Fourth, the Android OS offers an ideal environment for testing applications because it provides an indexed set of tools that developers can use to test their applications before publishing them in the Android Market. The Xcode testing tools for Apple iOS cannot match these specifications. Fifth, the Android OS has low system requirements, so it can be installed on low-powered hardware without necessarily having to install drivers. Lastly, the Android software supports a variety of social-integration features and Google functionalities and apps.
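The multitasking idea described above can be illustrated conceptually with concurrent threads. The sketch below is plain Python, not Android code: a simulated background "radio" task keeps running while a foreground task composes a message; the task names are invented for illustration.

```python
import threading
import time

results = []

def stream_radio(stop_event):
    # Simulated background task: keeps "playing" until asked to stop.
    while not stop_event.is_set():
        results.append("radio: playing chunk")
        time.sleep(0.01)

def write_message(text):
    # Simulated foreground task: composes a text message.
    results.append(f"message: {text}")

stop = threading.Event()
radio = threading.Thread(target=stream_radio, args=(stop,))
radio.start()
write_message("On my way!")   # runs while the radio thread keeps playing
time.sleep(0.05)
stop.set()
radio.join()
```

Both kinds of entries end up interleaved in `results`, which is the essence of multitasking: neither task had to wait for the other to finish.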
Disadvantages of Android OS Compared to iOS
First, Android's multitasking abilities lead users to develop complex applications that consume a lot of time and resources to master, thereby increasing challenges for users and developers. On the other hand, developers working with Apple iOS are given a stable platform with clearly specified tools and boundaries, making it easier for them to proceed with their development processes. Secondly, devices running on the Android platform cannot run Objective-C-based applications, unlike devices running on Apple iOS. Third, the Android OS has been known to crash more frequently than Apple iOS. Fourth, the Android OS takes longer to load applications than devices that run on Apple iOS. Last but not least, by virtue of being open source, Android is more likely to face application attacks and threats, placing devices at risk. Anybody is capable of creating applications, which increases the availability of applications, including malicious ones, in the Android Market.
There is no doubt that the transformable-information construct introduced by Tim Berners-Lee two decades ago has brought immense developments to the World Wide Web. The Web is a techno-social system that has enabled human beings to enhance their communication, cognition, and cooperation over technological networks (Kane and Hegarty, 2007). The aspects of the World Wide Web are defined and described based on technical specifications and non-proprietary standards. Most of these standards involve universally accepted practices that define the criteria for developing websites (Aghaei, Mohammad, and Farsani, 2012).
Web standards affect aspects of website development and administration such as the accessibility, usability, and interoperability of web pages. Several technologies have been introduced since the inception of the World Wide Web, including Web 1.0 (the web of cognition), Web 2.0 (the web of communication), Web 3.0 (the web of co-operation), and Web 4.0 (the web of integration) (Aghaei, Mohammad, and Farsani, 2012). Websites that adhere to web standards enable developers to understand the languages used by other developers during coding. The World Wide Web Consortium (W3C), formed in 1994, is charged with the responsibility of creating and maintaining web standards by developing specifications, tools, software, and guidelines that guide the development of websites so that they reach their full potential.
Web 1.0 refers to the first generation of the web, characterized by cognition; it started as an avenue for enabling individuals and businesses to broadcast information to the public. It only allowed users to search for information, without providing meaningful interaction (read-only).
Commonly referred to as the read-and-write technological platform, the Web 2.0 standards enable users to assemble and manage large communities of individuals with similar interests in social interaction (Cormode and Krishnamurthy, 2008). Dale Dougherty defined the Web 2.0 technology in 2004, presenting it as a business revolution in the computer industry aimed at building applications that harness network effects to attract many users. With both reading and writing capabilities, the Web 2.0 technology made the web bi-directional, giving users greater control compared to the Web 1.0 platform (Aghaei, Mohammad, and Farsani, 2012). The major characteristics of the Web 2.0 platform include interoperability, information-sharing capabilities, and collaboration among users on the World Wide Web (Fifarek, 2007).
Web 3.0 comprises two platforms: social computing and semantic technologies. The latter facilitates human-machine cooperation by organizing large volumes of individuals into online communities, while the former represents open standards that are applied on top of the web. Introduced in 2003 for purposes of defining structured data and linking it in a manner that allows effective automation, discovery, integration, and re-use across different applications, the semantic web is a key element of this generation. Another important characteristic of Web 3.0 is the capability of linking, integrating, and analyzing data collected from different data sets to create a new information stream (Aghaei, Mohammad, and Farsani, 2012).
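The semantic web's notion of structured, linked data can be sketched with subject-predicate-object triples. The example below is a toy illustration in Python, not a real RDF library; the entity names (`Alice`, `AcmeCorp`) are invented. It shows how linking two data sets lets a program derive a new information stream, as described above.

```python
# Minimal subject-predicate-object triple store illustrating the
# semantic-web idea of linked structured data (a sketch, not RDF).
triples = [
    ("Alice", "worksFor", "AcmeCorp"),
    ("AcmeCorp", "locatedIn", "London"),
    ("Alice", "knows", "Bob"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching a (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Integrate two facts: in which city does Alice's employer operate?
employers = [o for _, _, o in query("Alice", "worksFor")]
cities = [o for e in employers for _, _, o in query(e, "locatedIn")]
```

Neither triple alone states where Alice works geographically; following the links between triples produces that new piece of information automatically.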
Cloud computing refers to the use of software and hardware programs to deliver computing services over a network. It entrusts remote services with a user's computation, software, and data. There are several types of cloud computing open to both personal and business use. They include desktop as a service, security as a service, storage as a service, platform as a service, infrastructure as a service, and IT as a service. Cloud computing enables users to store personal files in remote locations; instead of storing files on server computers and hard drives, the user stores files remotely in “clouds” (Rochwerger & Caceres, 2009). Cloud computing has numerous benefits when used in an organization. In the business or work environment, common examples of cloud computing include web-based email services, web communication tools, customer relationship management, software as a service, file backup, file synchronization, and file storage. With the advancement of cloud computing in the business environment, individual firms can have their own clouds with restrictions. This implies that a firm can operate its own cloud for file storage and put limitations on who can or cannot access the files (Rochwerger & Caceres, 2009).
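The idea of a company cloud with access restrictions can be sketched as a simple access-control list over stored files. The Python below is a conceptual in-memory model, not a real cloud API; the class and method names (`CloudStore`, `grant`) are invented for illustration.

```python
class CloudStore:
    """Toy in-memory model of a firm's cloud with per-file access lists."""

    def __init__(self):
        self._files = {}   # path -> file contents
        self._acl = {}     # path -> set of users allowed to read

    def upload(self, user, path, data):
        # The uploader is always permitted to read their own file.
        self._files[path] = data
        self._acl[path] = {user}

    def grant(self, path, user):
        # The firm decides who else may access the file.
        self._acl[path].add(user)

    def download(self, user, path):
        if user not in self._acl.get(path, set()):
            raise PermissionError(f"{user} may not access {path}")
        return self._files[path]

store = CloudStore()
store.upload("mary", "reports/q3.txt", b"quarterly figures")
store.grant("reports/q3.txt", "colleague")
```

Here `mary` and her `colleague` can both read the report from anywhere, while any other user is refused, which mirrors the "limitations on who can or cannot access the files" described above.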
One of the major benefits of using cloud computing in the business environment is that it offers flexibility. Employees can access the files stored remotely in the organization's clouds from any location (Rochwerger & Caceres, 2009). As long as an employee is not restricted from accessing company data, he or she can log in from any location, whether in or out of the office. An internet connection is all the employee needs to log in and access company data (Rochwerger & Caceres, 2009). At the same time, access can be granted from any device that can reach the internet, including mobile phones, tablets, laptops, desktops, and servers. Employees also have the opportunity to work together on documents without necessarily being physically present (Rochwerger & Caceres, 2009). Documents and files can be edited, reviewed, and viewed simultaneously from different locations.
Secondly, cloud computing is easy to set up and keep running. For instance, it is very easy for individual employees to set up user accounts on the various online platforms that support file storage and sharing (Rochwerger & Caceres, 2009). All that is needed is simply an internet connection and a computing device. Cloud computing is also cheap for companies. It is not labor intensive, and a company does not need to install very expensive software programs. All the software and hardware are already installed remotely, and a business only needs to run the resource online. Many cloud computing applications come at no cost (Rochwerger & Caceres, 2009). This means that a company does not have to spend money buying external hard disks and server computers but can simply utilize the various forms of cloud computing. Another advantage is that cloud computing offers clients a high level of protection. It also allows IT to shift focus (Rochwerger & Caceres, 2009). A client or firm does not have to worry about updating its server computers and other computing issues; the focus shifts mainly to increasing and improving innovation (Rochwerger & Caceres, 2009).
Question 4: Case Study
4 (a): Computer security
It is often said that the weak link in computer security is not the technology itself but the people who use the technology. As in the case of Mary, many people find themselves letting down their guard and allowing attackers to hack into their accounts. In most cases, this happens when people are distracted from their work or are generally tired. Such situations make workers feel intimidated or simply cause them to make honest mistakes (White, 2009). Attackers use socially engineered schemes to obtain sensitive information from people. In Mary's case, the attacker pretended to be calling from the consulting firm responsible for providing internet and network solutions to Mary's company (Ministry of Communications and Information Technology, 2011). The conversation was structured in such a way that Mary would not figure out that she was being asked to provide vital information sufficient to hack into her account. According to an article in Science Daily, software and hardware programs can only do so much to secure data. It is ultimately the responsibility of people such as Mary to follow best practices in safeguarding their computers through passwords and authentication (Lerner & Lerner, 2000).
Since passwords have been compromised on many occasions, one can log in to a computer system in other ways. Some of these mechanisms are more secure than traditional passwords; however, they have not yet been made universally available. Biometrics is one such alternative, using authentication based on the unique personal characteristics of an individual. The error rate for biometrics is high without additional hardware to scan biological characteristics such as fingerprints (Lerner & Lerner, 2000). Another method of authentication is the use of non-text passwords. These may include images, graphical illustrations, colors, special characters, and digits instead of conventional text passwords (Lerner & Lerner, 2000).
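Whatever authentication mechanism is chosen, one widely accepted best practice is never to store passwords in plain text, so that a stolen database does not directly expose them. The sketch below uses Python's standard-library PBKDF2 function to store only a salted hash and to verify a login attempt; the function names `hash_password` and `verify_password` are illustrative, not from any particular system.

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Derive a salted hash; only the (salt, digest) pair is stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

Even if an attacker like the one who targeted Mary obtained the stored `(salt, digest)` pair, recovering the original password would require an expensive brute-force search rather than a simple lookup.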