
Category Archives: TECH-TALK

Evolution from 1G to 4G LTE: Understanding Mobile Technologies

Long Term Evolution (LTE), commonly known as 4G, is a rapidly spreading global technology that continues to evolve, offering unmatched data rates, higher capacity, and new levels of user experience.

According to Qualcomm, 65% of the world’s population is forecast to have LTE coverage by 2019.

To bring 5G to reality, the electronics industry is working to get the most out of every bit of available spectrum across a very wide range: low bands below 1 GHz, mid bands from 1 GHz to 6 GHz, and high bands known as mmWave.

This will make available nearly 11 GHz of spectrum (3.85 GHz of licensed spectrum and 7 GHz of unlicensed spectrum), more spectrum for wireless/mobile broadband than ever before. 5G technologies will take advantage of these ultra-wide bandwidths to meet rising demands for speed, quality of performance, and network capacity. The move to 5G, like the transitions from 3G to 4G, will also require new advanced technologies, including advanced antenna techniques to mobilize mmWave.


Let’s take a glimpse at the evolution of the mobile communication technology generations.

1G established seamless mobile connectivity introducing mobile voice services.

  • Mobile 1G established the foundation of mobile cellular phones.
  • Commercial mobile communication systems first appeared in the early 1980s (1G). These systems were based on analog transmission, and a relatively small proportion of the population had access to them.
  • Licensed Spectrum: Cleared spectrum for exclusive use by mobile technologies. Operator-deployed base stations provide access for subscribers.
  • Frequency Reuse: Reusing frequencies without interference through geographical separation. Neighboring cells operate on different frequencies to avoid interference.
  • Mobile Network: Coordinated network for seamless access and seamless mobility. Integrated, transparent backhaul network provides seamless access.
  • AMPS, NMT, and TACS were the standardized 1G technologies.

Mobile 1G was amazing, but limited:

  • These systems did not provide a great deal of security, and standardization was not controlled particularly well.
  • Limited capacity: analog voice consumed an entire channel, so only one call fit per channel. Frequency Division Multiple Access (FDMA) supported only one user (one analog phone call) per channel.
  • Large spectrum gaps were required between users to avoid interference; analog transmission used the limited spectrum inefficiently.
  • Limited scalability: analog devices were large, heavy, power-inefficient, and costly, leaving little scope for scaling.

2G digital wireless technologies increased voice capacity, delivering mobile voice services to the masses: more people, in more places.

  • The first 2G systems were standardized and deployed in the early 1990s. The GSM (Global System for Mobile Communications) standard has come to dominate, but other co-existing technologies such as cdmaOne also operate.
  • 2G systems were developed primarily for voice communications and incorporated circuit-switching technology. Later, some data capability was added with SMS (Short Message Service) and WAP (Wireless Application Protocol), but these were quite limited in terms of functionality and available capacity.
  • In a GSM network, mobile phones are in constant contact with the nearest BTS (Base Transceiver Station), which is connected, along with several other BTSs, to a BSC (Base Station Controller). The network of BSCs reports to an MSC (Mobile Switching Centre), which manages the communications and performs a number of functions including mobility management, authentication, encryption, billing, and connectivity to the PSTN (Public Switched Telephone Network).
  • Initial 2G technologies D-AMPS and GSM were based on TDMA, while cdmaOne was based on Code Division Multiple Access.
  • Early Mobile 2G technologies enabled more users per channel. Digital transmissions enable compressed voice and multiplexing multiple users per channel. Time Division Multiple Access (TDMA) allows multiple users per radio channel with each user talking one at a time.
  • D-AMPS, standardized as IS-54 by the TIA in 1992 and used mainly in North America, enabled three users per radio channel (it is no longer in use). Digital voice was compressed into smaller “packages”.
  • GSM, standardized by ETSI in 1990 (phase 1) and initiated in Europe, enabled eight users per radio channel and is still widely used today. It provides simple data services with GPRS. GSM offers a maximum of 9.6 kbps per channel, which is sufficient for voice but not suitable for any substantial data traffic.
  • GPRS allows a GSM channel to be divided into 8 streams of about 13 kbps each, with concurrent packet transmission on each, yielding a theoretical maximum of around 100 kbps. GPRS also allows simultaneous data transfer and voice calls.
  • TDMA still required large frequency gaps to reduce interference and relied on potentially unreliable “hard” handoffs: switching channels between adjacent cells made it prone to dropped calls. It also imposed a rigid delivery schedule, occupying a time slot whether or not the user was actively talking.
  • Code Division Multiple Access (CDMA), developed by Qualcomm, enabled users to share the same frequency and communicate at the same time, much as several people can talk simultaneously in different languages (“codes”). Each user’s information is encoded with a unique code, and with no rigid delivery schedule, capacity freed while one user is silent can support more callers (a spreading-code sketch follows this list).
  • CDMA utilizes all the available spectrum and can support many more users (>10x 1G) in the same spectrum.
  • In February 1990 First CDMA field trial completed by Qualcomm and NYNEX.
  • Qualcomm developed a new CDMA standard (IS-95) referred to as cdmaOne in May 1995 which was commercially deployed in December 1995.
  • Actual GPRS speeds are considerably lower than the 100 kbps mark. EDGE (Enhanced Data rates for GSM Evolution) was an attempt to improve data rates, delivering up to 384 kbps, and represents another step towards 3G.
  • Scalable technology: digital components cost and weigh far less, deliver a more secure signal, and offer ample scope for further scaling using semiconductor large-scale integration.
  • Thanks to GSM and CDMA, more and more people acquired a mobile subscription.
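
To make the “different languages” analogy concrete, here is a minimal, illustrative Python sketch (toy Walsh codes and bit values, not any standard’s actual air interface) of how two CDMA users can transmit on the same channel at the same time and still be separated at the receiver:

```python
import numpy as np

# Orthogonal Walsh codes (rows of a 4x4 Hadamard matrix), one code per user
walsh = np.array([
    [1,  1,  1,  1],
    [1, -1,  1, -1],
])

def spread(bits, code):
    """Spread each data bit (+1/-1) into a chip sequence using the user's code."""
    return np.concatenate([b * code for b in bits])

def despread(signal, code):
    """Correlate the combined signal with a user's code to recover that user's bits."""
    chips = signal.reshape(-1, len(code))
    return np.sign(chips @ code)

user1_bits = np.array([1, -1, 1])
user2_bits = np.array([-1, -1, 1])

# Both users transmit on the same channel at the same time; the receiver sees the sum
channel = spread(user1_bits, walsh[0]) + spread(user2_bits, walsh[1])

print(despread(channel, walsh[0]))   # recovers user 1's bits: [ 1. -1.  1.]
print(despread(channel, walsh[1]))   # recovers user 2's bits: [-1. -1.  1.]
```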

3G optimized mobile networks for data, enabling mobile broadband services, and continues to evolve for faster and better connectivity.

  • 3G extends mobile services to include multi-rate multimedia content.
  • CDMA is the foundation for Mobile 3G technologies. Mobile 3G evolved into competing standards, all based on CDMA: CDMA2000/EV-DO, WCDMA/HSPA+, and TD-SCDMA.
  • Using 3G technology consumers were introduced to mobile broadband internet access in the home and office.
  • Amazing innovations in device technology resulted in the era of the smartphone.
  • 3G techniques use higher-order adaptive modulation to deliver more bits per second per Hz (bps/Hz) to users with good signal quality, increasing peak data rates; 64-QAM enables 50% more bps/Hz than 16-QAM (see the sketch after this list).
  • The channel is optimized by scheduling users at the instants when their radio signal conditions are good (with fairness), which increases overall capacity. Aggregating spectrum further increases user and peak data rates.
  • CDMA2000 uses a 1.25 MHz carrier, enabling easy migration from cdmaOne. Qualcomm introduced EV-DO (Evolution-Data Optimized), an optimized data channel for CDMA2000 providing mobile broadband services, in 1999; it launched commercially in 2002.
  • WCDMA (UMTS) uses a 5 MHz carrier and leverages the GSM core network; it was introduced in 2004. HSPA (High Speed Packet Access) is the optimized data channel for WCDMA providing mobile broadband services. The first DC-HSPA+ network (42 Mbps) was commercially launched in 2010.
  • While the 2G GSM data rate was below 0.5 Mbps, rates reached up to 14.7 Mbps with CDMA2000/EV-DO, whose data-optimized channel supports larger packet sizes, enabling richer content.
  • Overall, 3G techniques delivered achievable throughput above 2 Mbps, reduced operator cost for data services, and continue to evolve for enhanced services.
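
As a rough illustration of the two levers mentioned above, the following sketch (all figures are assumptions, not standardized values) shows why 64-QAM carries 50% more bits per symbol than 16-QAM and how aggregating carriers scales the peak rate:

```python
import math

def bits_per_symbol(order):
    """A QAM constellation of the given order carries log2(order) bits per symbol."""
    return math.log2(order)

qam16 = bits_per_symbol(16)   # 4 bits per symbol
qam64 = bits_per_symbol(64)   # 6 bits per symbol
print(f"64-QAM vs 16-QAM: {100 * (qam64 - qam16) / qam16:.0f}% more bps/Hz")   # 50%

# Illustrative peak rate: carrier bandwidth x assumed spectral efficiency x carriers
carrier_mhz = 5.0           # WCDMA-style carrier width (assumption)
efficiency_bps_hz = 2.0     # assumed achievable spectral efficiency
for carriers in (1, 2):     # dual-carrier aggregation roughly doubles the peak
    print(f"{carriers} carrier(s): ~{carrier_mhz * efficiency_bps_hz * carriers:.0f} Mbps peak")
```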

4G LTE delivers more capacity for faster and better mobile broadband experiences and is also expanding into new frontiers.

  • Mobile 4G LTE is the first global standard for mobile broadband.
  • It supports two modes within a common standard and the same ecosystem: LTE FDD and LTE TDD.
  • Frequency Division Duplex (FDD) uses paired spectrum for better coverage, while Time Division Duplex (TDD) uses unpaired spectrum, enabling asymmetrical DL/UL allocation for more downlink capacity.
  • Multimode 3G/LTE is the foundation for successful 4G LTE.
  • 4G LTE provides more data capacity for richer content and more connections, while 3G delivers a consistent broadband experience outside 4G LTE coverage along with ubiquitous voice services and global roaming.
  • Flexible support for wider channels, up to 20 MHz, is enabled with OFDMA, supporting more users.
  • Advanced MIMO techniques use more antennas to create spatially separated paths; 2×2 MIMO is mainstream.
  • Carrier aggregation combines channels, up to 100 MHz in total, for higher data rates (a rough peak-rate sketch follows this list).
  • A simplified, all-IP core network with a flattened architecture means less equipment per transmission and real-time connectivity.
  • Low latencies and optimized response times for both the user and control planes improve the user experience.
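
The peak-rate sketch referenced above: an intentionally simplified, assumed calculation of how channel width, MIMO layers, and carrier aggregation multiply LTE throughput (real peak rates depend on resource-grid details omitted here):

```python
def peak_rate_mbps(channel_mhz, bps_per_hz, mimo_layers, carriers):
    """Very rough peak rate: bandwidth x spectral efficiency x MIMO layers x carriers."""
    return channel_mhz * bps_per_hz * mimo_layers * carriers

# One 20 MHz carrier, 2x2 MIMO, assumed 3.75 bps/Hz per spatial layer
print(peak_rate_mbps(20, 3.75, 2, 1))   # ~150 Mbps
# Aggregating a second 20 MHz carrier roughly doubles the peak
print(peak_rate_mbps(20, 3.75, 2, 2))   # ~300 Mbps
```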

For a summarized view, a comparison table of the various features that evolved across generations is shown below.

Comparison table

[Comparison table image: 1G to 4G feature comparison]
Source: Internet and Qualcomm

SPICE (Simulation Program with Integrated Circuit Emphasis)

Circuit simulation is a technique to predict the behavior of a real circuit using a computer program. It replaces real components with predefined electrical models. It is not possible to consider all the physical processes in the parts, or all PCB parasitics, in circuit-level simulation, so the results only reflect the specific models that are put in. This is why simulators cannot substitute for breadboarding and prototyping. However, they allow measurements of internal currents, voltages, and power that in many cases are virtually impossible to make any other way.

SPICE (Simulation Program with Integrated Circuit Emphasis) is a general-purpose open source analog electronic circuit simulator. It is a powerful program that is used in integrated circuit and board-level design to check the integrity of circuit designs and to predict circuit behavior.

Integrated circuits, unlike board-level designs composed of discrete parts, are impossible to breadboard before manufacture. Further, the high costs of photolithographic masks and other manufacturing prerequisites make it essential to design the circuit to be as close to perfect as possible before the integrated circuit is first built. Simulating the circuit with SPICE is the industry-standard way to verify circuit operation at the transistor level before committing to manufacturing an integrated circuit.

Board-level circuit designs can often be breadboarded for testing. Even with a breadboard, some circuit properties may not be accurate compared to the final printed wiring board, such as parasitic resistances and capacitances. These parasitic components can often be estimated more accurately using SPICE simulation. Also, designers may want more information about the circuit than is available from a single mock-up. For instance, circuit performance is affected by component manufacturing tolerances. In these cases it is common to use SPICE to perform Monte Carlo simulations of the effect of component variations on performance, a task which is impractical using calculations by hand for a circuit of any appreciable complexity.
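
As a flavour of what such a tolerance analysis looks like, here is a minimal Monte Carlo sketch in Python (a toy voltage divider with assumed 5% resistors, not an actual SPICE run):

```python
import random

VIN = 5.0                                  # supply voltage
R1_NOM, R2_NOM, TOL = 10e3, 10e3, 0.05     # nominal resistances and +/-5% tolerance

def sample(nominal, tol):
    """Draw a component value uniformly within its tolerance band."""
    return nominal * random.uniform(1 - tol, 1 + tol)

outputs = []
for _ in range(10_000):
    r1, r2 = sample(R1_NOM, TOL), sample(R2_NOM, TOL)
    outputs.append(VIN * r2 / (r1 + r2))   # divider output voltage

print(f"min {min(outputs):.3f} V, max {max(outputs):.3f} V, "
      f"mean {sum(outputs) / len(outputs):.3f} V")
```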

Circuit simulation programs, of which SPICE and its derivatives are the most prominent, take a text netlist describing the circuit elements (transistors, resistors, capacitors, etc.) and their connections, and translate this description into equations to be solved. The general equations produced are nonlinear differential-algebraic equations which are solved using implicit integration methods, Newton’s method, and sparse matrix techniques.
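
For intuition, the following toy Python example performs the linear, resistive-only version of that process: build the nodal conductance equations for a hand-written “netlist” and solve them. Real SPICE additionally handles nonlinear devices with Newton iterations and uses sparse matrices; none of that is shown here.

```python
import numpy as np

# "Netlist": a 1 mA current source drives node 1; R1 = 1k between nodes 1 and 2;
# R2 = 2k from node 2 to ground; R3 = 3k from node 1 to ground.
R1, R2, R3 = 1e3, 2e3, 3e3
I1 = 1e-3

# Nodal conductance matrix G and source vector i (ground node eliminated)
G = np.array([
    [1/R1 + 1/R3, -1/R1],
    [-1/R1,        1/R1 + 1/R2],
])
i = np.array([I1, 0.0])

v = np.linalg.solve(G, i)   # node voltages
print(v)                    # -> [1.5 1. ] : 1.5 V at node 1, 1.0 V at node 2
```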

SPICE was developed at the Electronics Research Laboratory of the University of California, Berkeley by Laurence Nagel with direction from his research advisor, Prof. Donald Pederson. SPICE1 was largely a derivative of the CANCER program, which Nagel had worked on under Prof. Ronald Rohrer. CANCER was an acronym for “Computer Analysis of Nonlinear Circuits, Excluding Radiation,” a nod to Berkeley’s 1960s liberalism: at the time many circuit simulators were developed under United States Department of Defense contracts that required the capability to evaluate the radiation hardness of a circuit. When Nagel’s original advisor, Prof. Rohrer, left Berkeley, Prof. Pederson became his advisor. Pederson insisted that CANCER, a proprietary program, be rewritten enough that restrictions could be removed and the program could be put in the public domain.

SPICE1 was first presented at a conference in 1973. SPICE1 was coded in FORTRAN and used nodal analysis to construct the circuit equations. Nodal analysis has limitations in representing inductors, floating voltage sources and the various forms of controlled sources. SPICE1 had relatively few circuit elements available and used a fixed-time step transient analysis. The real popularity of SPICE started with SPICE2 in 1975. SPICE2, also coded in FORTRAN, was a much-improved program with more circuit elements, variable time step transient analysis using either the trapezoidal (second order Adams-Moulton method) or the Gear integration method (also known as BDF), equation formulation via modified nodal analysis (avoiding the limitations of nodal analysis), and an innovative FORTRAN-based memory allocation system developed by another graduate student, Ellis Cohen. The last FORTRAN version of SPICE was 2G.6 in 1983. SPICE3 was developed by Thomas Quarles (with A. Richard Newton as advisor) in 1989. It is written in C, uses the same netlist syntax, and added X Window System plotting.

As an early open source program, SPICE was widely distributed and used. Its ubiquity became such that “to SPICE a circuit” remains synonymous with circuit simulation. SPICE source code was from the beginning distributed by UC Berkeley for a nominal charge (to cover the cost of magnetic tape). The license originally included distribution restrictions for countries not considered friendly to the USA, but the source code is currently covered by the BSD license.

SPICE inspired and served as a basis for many other circuit simulation programs, in academia, in industry, and in commercial products. The first commercial version of SPICE was ISPICE, an interactive version on a timeshare service, National CSS. The most prominent commercial versions of SPICE include HSPICE (originally commercialized by Shawn and Kim Hailey of Meta Software, but now owned by Synopsys) and PSPICE (now owned by Cadence Design Systems). The academic spinoffs of SPICE include XSPICE, developed at Georgia Tech, which added mixed analog/digital “code models” for behavioural simulation, and Cider (previously CODECS, from UC Berkeley/Oregon State Univ.) which added semiconductor device simulation. The integrated circuit industry adopted SPICE quickly, and until commercial versions became well developed many IC design houses had proprietary versions of SPICE. Today a few IC manufacturers, typically the larger companies, have groups continuing to develop SPICE-based circuit simulation programs. Among these are ADICE at Analog Devices, LTspice at Linear Technology, Mica at Freescale Semiconductor, and TISPICE at Texas Instruments. (Other companies maintain internal circuit simulators which are not directly based upon SPICE, among them PowerSpice at IBM, Titan at Qimonda, Lynx at Intel Corporation, and Pstar at NXP Semiconductor.)

SPICE became popular because it contained the analyses and models needed to design integrated circuits of the time, and was robust enough and fast enough to be practical to use. Precursors to SPICE often had a single purpose: The BIAS program, for example, did simulation of bipolar transistor circuit operating points; the SLIC program did only small-signal analyses. SPICE combined operating point solutions, transient analysis, and various small-signal analyses with the circuit elements and device models needed to successfully simulate many circuits.

Some of the popular circuit simulators are as follows:

1. ASTAP
2. Advanced Design System
3. CircuitLogix
4. CPU Sim
5. GNU Circuit Analysis Package
6. Gpsim
7. ICAP/4
8. Logisim
9. Micro-Cap
10. NI Multisim
11. National Instruments Electronics Workbench Group
12. Ngspice
13. PSpice
14. PowerEsim
15. Quite Universal Circuit Simulator
16. SPICE
17. SapWin
18. SmartSpice
19. SNAP
20. Spectre Circuit Simulator
21. SpectreRF

These simulators differ from each other and are generally application specific. The most popular SPICE simulators for analog circuit simulation are PSpice, originally offered by MicroSim and now incorporated into Cadence OrCAD, and National Instruments Multisim.

“INS Arihant”: India’s First Nuclear Submarine

India launched its first nuclear-powered submarine in a ceremony in the southern port city of Visakhapatnam on 26 July 2009, becoming the sixth nation in the world to have successfully built one. The 367-foot-long INS Arihant means “Destroyer of the Enemies” in Hindi, according to the official news release; the name has its origins in the Jain religion, and unofficial news reports render it “Destroyer of Enemies”, omitting the definite article. Besides the US, which has 74 nuclear submarines (nearly as many as all other countries combined), Russia (45), the UK (13), France (10) and China (10) also possess nuclear-powered submarines.

India is a nation that struggled to enter the select group of countries that build nuclear-powered submarines. Its program, the ATV or Advanced Technology Vessel, was initiated in 1974, but after three decades it had not produced results that could change the picture of navies with nuclear propulsion.

The INS Arihant, India’s first nuclear submarine, till now known by the code name S 2, was launched at a simple ceremony in the port town of Visakhapatnam [Vizag] with the traditional breaking of a coconut on its hull by Prime Minister Manmohan Singh’s wife, Gursharan Kaur. It was expected to be ready for induction into the Navy by 2011 after a series of exhaustive trials.

The launch ceremony was attended by the Prime Minister, Dr. Manmohan Singh, accompanied by Smt. Gursharan Kaur; Raksha Mantri Shri A.K. Antony; Chief Minister of Andhra Pradesh Dr. Y.S. Rajasekhar Reddy; Raksha Rajya Mantri Shri M.M. Pallam Raju; Minister of State for Human Resource Development Smt. D. Purandareswari; Chief of the Naval Staff Admiral Sureesh Mehta; and high-ranking officials from the Navy, the Department of Atomic Energy, and the Defence Research and Development Organisation.


On this occasion, the Prime Minister congratulated the Director General of the ATV (Advanced Technology Vessel) Programme, Vice Admiral D.S.P. Verma (Retd), and all personnel associated with it for achieving this historic milestone in the country’s defence preparedness. He noted that they had overcome several hurdles and barriers to enable the country to acquire self-reliance in the most advanced areas of defence technology. The Prime Minister made a special mention of the cooperation extended by Russia. He stated that the Government is fully committed to ensuring the defence of our national interests and the protection of our territorial integrity, and would render all support to the constant modernization of our defence forces and to ensuring that they remain at the cutting edge of technology.

The project director, Vice Admiral (Retd) D.S.P. Verma, said that the Arihant is a 6,000-tonne submarine with a length of 110 meters and a breadth of 11 meters. The length is about 10 percent longer than previously published estimates, while the 11 meter beam is much less than the 15 meters of previous unofficial estimates. Experts say the vessel will be able to carry 12 K-15 submarine-launched ballistic missiles that have a range of over 700 km. The Indian nuclear-powered attack submarine design was said in some reports to have a 4,000-ton displacement and a single-shaft nuclear power plant of Indian origin. By other accounts it would be 9,400 tons displacement when submerged and 124 meters long.

The MoD/PMO decided not to release any photographs of the submarine, and no filming or photography by the media was permitted inside the Matsya Dock. One report stated that the submarine was visibly based on the Russian Borei-class SSBN, and claimed that the official invitation carried a silhouette of the submarine indicating that it is almost certainly based on the Borei. But the Project 955 Borei has a length of 170 meters (580 feet), a beam of 13 meters (42 feet), and a displacement of 11,750-12,250 tons surfaced and 17,000 tons submerged.

India has been working actively since 1985 to develop an indigenously constructed nuclear-powered submarine, one possibly based on elements of the Soviet Charlie II-class design, detailed drawings of which are said to have been obtained from the Soviet Union in 1989. The project illustrates India’s industrial capabilities and weaknesses. The secretive Advanced Technology Vessel (ATV) project to provide nuclear propulsion for Indian submarines has been one of India’s more ill-managed programmes. With the participation of Russian scientists and technicians in the various phases of the program came the possibility that the first Indian nuclear-propelled submarine could be operational in 2009, having been launched in 2006-2007.

Although India has the capability of building the hull and developing or acquiring the necessary sensors, its industry had been stymied by several system integration and fabrication problems in trying to downsize a 190 MW pressurized water reactor (PWR) to fit into the space available within the submarine’s hull. The Prototype Testing Centre (PTC) at the Indira Gandhi Centre for Atomic Research, Kalpakkam, was used to test the submarine’s turbines and propellers. A similar facility is operational at Visakhapatnam to test the main turbines and gear box.

In 1998, L&T began fabricating the hull of ATV but the struggle with the reactor continued. After BARC designs failed, India bought reactor designs from Russia. By 2004 the reactor had been built, tested on land at the IGCAR and had gone critical. Its modest size, around 6,000 tons (the Ohio class SSBN in the movie Crimson Tide weighs over 14,000 tons), led experts to call it a “baby boomer”.

India had ample experience building Pressurised Heavy Water Reactors (PHWRs) using natural, un-enriched uranium as fuel and heavy water as moderator and coolant. But this was the first time that India had built a PWR, which uses enriched uranium as fuel and light water as both coolant and moderator. The electrical power reactors that India would be importing (potentially from Russia, France, and the US) would also be PWRs with enriched uranium as fuel and light water as both coolant and moderator. Naval nuclear reactors typically use uranium enriched to much higher levels than is the case with shore-based power reactors.

While the present project reportedly ends at three units, defence officials have not ruled out building larger submarines on the basis of national strategic imperatives, which have changed since the conception of the project. By the time the first unit was launched in July 2009, construction of the hull for the next one was reportedly already underway at the Larsen & Toubro (L&T) facility at Hazira where the first hull was built. The cost of the three submarines was reported at over Rs 3,000 crore, over US$600,000,000 [the Indian numbering system is denominated in crore (1,00,00,000) and lakh (1,00,000), so Rs 3,000 crore is Rs 30,000,000,000, or US$623,104,807.77 at the exchange rate on the day INS Arihant was launched]. Another report said that the first submarine alone had cost Rs 14,000 crore [about US$2.9 billion]. In April 2006, the larger American Virginia-class subs were priced at $2.4 billion apiece, at which time the goal was to cut the program’s cost to about $2 billion per sub (a baseline expressed in fiscal 2005 dollars). As of late 2008 the procurement cost for the first three units of the British Astute-class SSN was forecast at £3,806 million (outturn prices) [about US$6.3 billion at 2009 conversion rates], for a unit cost of about US$2.1 billion.

The three submarines would be based at a facility being developed at Rambilli close to Visakhapatnam, where hundreds of acres of land had already been acquired. The Indian Navy hoped to commission the base by 2011, in time for INS Arihant’s commissioning; two of these submarines would be at sea at any given time while the third would be in maintenance at the base. Other reports claim that India plans to build a fleet of five nuclear-powered submarines. One report in 2009 stated that the government had given clearance for the construction of much bigger SSBNs, nuclear-powered submarines capable of launching ballistic missiles, each costing about $2 billion (approximately Rs 10,000 crore each). This would take off once the three Arihant-class submarines were ready.

By 2004 it was reported that the first ATV would be launched by 2007. At that time it was reported that it would be an SSGN and displacing some 6,500 tons, with a design derivative of Russia’s Project 885 Severodvinsk-class (Yasen) SSN. The ATV multirole platform would be employed for carrying out long-distance interdiction and surveillance of both submerged targets as well as principal surface combatants. It would also facilitate Special Forces operations by covertly landing such forces ashore. The ATV pressure hull will be fabricated with the HY-80 steel obtained from Russia.

The vessel would be multirole: it could carry medium-range cruise missiles (1,000 km), short-range ballistic missiles (300 km), torpedoes and mines, besides supporting special operations.

The ATV is said to be a modified Akula-I class submarine. The Russian Akula-II and Yasen are also modified Akula-Is; by this line of reasoning the ATV would be in the league of the Yasen, making it about 6,500 tons light, 8,500 tons armed and surfaced, and 10,000 tons submerged. It would be the biggest and heaviest combat naval vessel built in India to date.

The 100-member crew, which will man the submarine, was trained at an indigenously-developed simulator in the School for Advanced Underwater Warfare (SAUW) at the naval base in Vizag. Hands-on training will be done on the INS Chakra, a 12,000-tonne Akula-II class nuclear-powered attack submarine being taken on a 10-year lease from Russia. SBC in Vizag is to become the assembly line for three ATVs, costing a little over Rs 3,000 crore each or the cost of a 37,000 ton indigenous aircraft carrier built at the Cochin Shipyard. Larsen and Toubro (L&T) has begun building the hull of the second ATV at its facility in Hazira, to be inducted into the navy by 2012.

As of 2007 the first of the five long-delayed ATVs was scheduled to be fully-ready by 2010 or so. In August 2008 it was reported that on January 26, 2009, the sluice gates of an enclosed dry-dock in Visakhapatnam were to be opened and the world was to take its first look at India’s first nuclear-powered submarine, the Advanced Technology Vessel (ATV), as it entered the waters.

In February 2009 defence minister A.K. Antony confirmed that India’s nuclear-powered submarine was in the final stages. “The Advanced Technology Vessel (ATV) project is in the final stage. We had some problems with the raw material in the initial phase. But now the project is in its final stage,” he said at the ongoing Aero India show. This was a rare admission by the defence minister, not only of the existence of the secretive project to build an indigenous nuclear submarine but also of its developmental status. The submarine, modelled on the Russian Charlie-class submarine, was slated for sea trials in 2009. Officials in the navy and the atomic energy department were hopeful of meeting the deadline this time. In the long run, the government plans to buy three nuclear submarines to give the navy the capability to stay underwater for very long periods. Though defence and nuclear scientists had been working on this project since 1985, they had initial setbacks with the material and the miniaturisation of the nuclear reactor which would be fitted into the submarine’s hull.

“Do Business with Style”: Cool New Templates for Microsoft Office 2007

Are you bored with the traditional old Microsoft Office environment? Then here is something new. Microsoft Small Business is now giving out some cool free designer templates for Office 2007 for Windows and Office 2008 for Mac. The best part is that they come in 9 colour schemes. The six templates include a spreadsheet, presentation, invoice, letterhead, business card, and newsletter blast. You can also get a 60-day free trial of Microsoft Office if you don’t have it. Just click the link.

Microsoft Office 2007 free templates

Don’t forget that these templates will only work if you have Office 2007 for Windows or Office 2008 for Mac, because the templates are in the new formats (.docx, .xlsx, etc.). The templates are zip files and you can download them directly. No verification required.

List of Approved Post Graduate Education and Research institutions for M Tech & M E

Here is a link with information about M.Tech and M.E. seats in various colleges in India. Although the document is a little dated, it is very helpful for a basic idea of the available courses and seat information. Click the link below to download the document.

List of Approved Post Graduate Education and Research institutions Upto 30th September, 2004 (M.E/M.Tech)

Save Energy, Go Green


Not with envy, but kindness to the environment, and alertness to your electricity bills
You leave your PC on all day. But you care for the environment, so you switch off the monitor. Good move, but did you know you are still wasting about 45 Watts with the CPU running?
That’s what Tufts University’s Climate Initiative says. And it also says that if you leave your PC on for the entire day, 850-1,500 pounds of carbon dioxide are released into the atmosphere over a year, and that 60-300 trees are needed to absorb that much CO2 in a year.
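
A quick back-of-the-envelope check of that 45-watt figure (the tariff and emission factor below are assumptions; plug in your own):

```python
IDLE_WATTS = 45                  # CPU draw with the monitor off (figure from the article)
HOURS_PER_YEAR = 24 * 365
TARIFF_PER_KWH = 0.12            # assumed electricity price per kWh
KG_CO2_PER_KWH = 0.8             # assumed grid emission factor

kwh = IDLE_WATTS * HOURS_PER_YEAR / 1000     # ~394 kWh per year
print(f"{kwh:.0f} kWh/year, ~{kwh * TARIFF_PER_KWH:.0f} in electricity, "
      f"~{kwh * KG_CO2_PER_KWH:.0f} kg of CO2")
```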
That does get you thinking, doesn’t it? Now, all that noise about climate change because of our insensitivity to the environment starts to make sense. We cut down trees, we waste electricity, and we replace cellphones and gadgets with the latest ones without bothering about what really happens to the old ones. While there could be debates about how much all this really impacts the environment, most of us know intuitively that what we are doing mindlessly is not right, and is likely to have negative repercussions.
That is why the world over, the word Green is becoming red hot. Green computing is in. This means that you start to use computing resources efficiently. It’s not just about being good to Mother Nature, but also being able to save a lot of money being spent on electricity. For companies that have thousands and thousands of computers running, datacenters keeping their businesses up and running, all this can add to a huge fortune. In fact, going green is now gaining so much momentum that those companies which do not have green computing initiatives are seen as enemies of the environment.
So how does it matter to you? You might have one PC and a laptop at home, in addition to the multiple electronic devices you run. And you might even be considering another PC for the little one. Think if you really need all those PCs. Buy only if you have to. While buying, remember that laptops consume less power, so it could be wiser to go the portable way. Or if you need to look at a PC, opt for monitors that consume less power. LCD monitors need much less power than CRT ones. Look for Energy Star ratings and save energy.
Explore the power saving options of your PC and customize them to suit the way you work. If you often go away from your PC for a long time, you can set the monitor and hard disk to be switched off after a few minutes of no-activity.
To dispose of old PCs and gadgets, get in touch with NGOs working in this area and figure out the best way to do so. Or, if you own a branded PC or gadget, get in touch with the company and ask how this e-waste can be managed. Most computer vendors and cellphone makers have green initiatives running, so you might get some help.
These are still early days for green initiatives for companies in India. But if you and I make a start, the rest will fall in place.

Chandrayaan-II to be in orbit by 2011-12

 COIMBATORE: Even as India’s maiden lunar probe circles the moon, the Centre has given its approval for Chandrayaan-II and it would be in orbit by 2012. 
ISRO has started the necessary research work for the next mission, for which the Centre has sanctioned funds, Chandrayaan-I Project Director Mylswamy Annadurai told reporters. The second mission would be a fully indigenous one, he said.

Chandrayaan-I is the best thing to happen to Indian space research and is designed to study the availability of water and the characteristics of the lunar soil, he said.

The moon mission has proved that India is on par with any other nation which ventured to the earth’s satellite, he said.

Stating that the pictures being received from the moon were giving very valuable inputs, he said steps were being taken to get continuous pictures by making some technical corrections.

Earlier, the Scientist was felicitated by the public at different places in Coimbatore district for the successful launch of the country’s moon mission.

Accepting the felicitations, Annadurai exhorted the students to shelve their foreign dreams as opportunities were available within India.

The days of foreign students coming to India in pursuit of research works and higher studies were not far away, he said. 

source:www.indiatimes.com

Chandrayaan-I Impact Probe lands on moon

BANGALORE: India marked its presence on the Moon on Friday night, becoming only the fourth nation to reach this historic milestone, after a Moon Impact Probe painted with the national tri-colour successfully landed on the lunar surface after being detached from the unmanned spacecraft Chandrayaan-1.

Joining the US, the erstwhile Soviet Union and the European Union, the 35-kg Moon Impact Probe (MIP) hit the moon exactly at 8.31 PM, about 25 minutes after the probe instrument descended from the satellite in what ISRO described as a “perfect operation”. 

 

Miniature Indian flags painted on four sides of the MIP signalled the country’s symbolic entry into moon to coincide with the birth anniversary of the country’s first Prime Minister Jawaharlal Nehru, observed as Children’s Day. 

 

“It will signify the entry of India on Moon,” an ISRO official said. 
The MIP is one of the 11 scientific instruments (payloads) onboard Chandrayaan-1, India’s first unmanned spacecraft mission to Moon launched on October 22 from Sriharikota spaceport. 
Developed by ISRO’s Vikram Sarabhai Space Centre of Thiruvananthapuram, the primary objective of MIP is to demonstrate the technologies required for landing a probe at the desired location on the moon. 
The probe will help qualify some of the technologies related to future soft landing missions. This apart, scientific exploration of the moon at close distance is also intended using MIP. 
During its 20-minute descent to the moon’s surface, MIP took pictures and transmitted them back to the ground. The first pictures are expected to be made public on Saturday. 
ISRO officials said Chandrayaan-1 detached the Moon Impact Probe as planned. 
The US was the first country whose flag adorned the moon; the MIP’s landing on the earth’s natural satellite was the first hard landing on the moon in 32 years.
The spacecraft on Friday reached its final orbital home, about 100 kms over the moon surface after ISRO scientists successfully carried out the last critical orbit lowering operation. 
The MIP consists of a C-band Radar Altimeter for continuous measurement of altitude of the probe, a video imaging system for acquiring images of the surface of moon from the descending probe and a mass spectrometer for measuring the constituents of extremely thin lunar atmosphere during its 20-minute descent to the lunar surface. 
The MIP withstood the impact of a hardlanding after it hit the lunar surface. 
From the operational circular orbit of about 100 km height passing over the polar regions of the moon, Chandrayaan-1 is intended to conduct chemical, mineralogical and photo-geological mapping of the moon with its 11 scientific instruments (payloads). 
After the successful release of the MIP, the other scientific instruments would be turned on sequentially, leading to the normal phase of the two-year mission. 
 
 
 
source:www.indiatimes.com

History of the “Programmable Logic Controller”

   

 Origin

The PLC was invented in response to the needs of the American automotive manufacturing industry. Programmable controllers were initially adopted by the automotive industry where software revision replaced the re-wiring of hard-wired control panels when production models changed. Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles was accomplished using hundreds or thousands of relays, cam timers, and drum sequencers and dedicated closed-loop controllers. The process for updating such facilities for the yearly model change-over was very time consuming and expensive, as the relay systems needed to be rewired by skilled electricians. In 1968 GM Hydramatic (the automatic transmission division of General Motors) issued a request for proposal for an electronic replacement for hard-wired relay systems. The winning proposal came from Bedford Associates of Bedford, Massachusetts. The first PLC, designated the 084 because it was Bedford Associates’ eighty-fourth project, was the result. Bedford Associates started a new company dedicated to developing, manufacturing, selling, and servicing this new product: Modicon, which stood for MOdular DIgital CONtroller. One of the people who worked on that project was Dick Morley, who is considered to be the “father” of the PLC. The Modicon brand was sold in 1977 to Gould Electronics, and later acquired by German Company AEG and then by French Schneider Electric, the current owner. One of the very first 084 models built is now on display at Modicon’s headquarters in North Andover, Massachusetts. It was presented to Modicon by GM, when the unit was retired after nearly twenty years of uninterrupted service. The automotive industry is still one of the largest users of PLCs, and Modicon still numbers some of its controller models such that they end with eighty-four.  

Development

Early PLCs were designed to replace relay logic systems. These PLCs were programmed in “ladder logic”, which strongly resembles a schematic diagram of relay logic. Modern PLCs can be programmed in a variety of ways, from ladder logic to more traditional programming languages such as BASIC and C. Another method is State Logic, a Very High Level Programming Language designed to program PLCs based on State Transition Diagrams. Many of the earliest PLCs expressed all decision making logic in simple ladder logic which appeared similar to electrical schematic diagrams. The electricians were quite able to trace out circuit problems with schematic diagrams using ladder logic. This program notation was chosen to reduce training demands for the existing technicians. Other early PLCs used a form of instruction list programming, based on a stack-based logic solver.  
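
To illustrate the idea, here is a minimal Python sketch (not any vendor’s runtime or ladder syntax) of the PLC scan cycle and the classic start/stop “seal-in” rung that ladder logic expresses: read inputs, evaluate the logic, write outputs, repeat.

```python
# Scripted input sequence stands in for real field wiring
scans = [
    {"start": True,  "stop": False},   # operator presses START
    {"start": False, "stop": False},   # START released: the rung "seals in"
    {"start": False, "stop": True},    # STOP pressed: the seal is broken
]

motor = False                          # state of the output coil
for inputs in scans:                   # each pass corresponds to one PLC scan
    # Ladder rung: motor runs if (START pressed OR already running) AND STOP not pressed
    motor = (inputs["start"] or motor) and not inputs["stop"]
    print(inputs, "-> motor is", "ON" if motor else "OFF")
```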

 Programming

Early PLCs, up to the mid-1980s, were programmed using proprietary programming panels or special-purpose programming terminals, which often had dedicated function keys representing the various logical elements of PLC programs. Programs were stored on cassette tape cartridges. Facilities for printing and documentation were very minimal due to lack of memory capacity. The very oldest PLCs used non-volatile magnetic core memory.  

 Functionality

The functionality of the PLC has evolved over the years to include sequential relay control, motion control, process control, distributed control systems and networking. The data handling, storage, processing power and communication capabilities of some modern PLCs are approximately equivalent to desktop computers. PLC-like programming combined with remote I/O hardware, allow a general-purpose desktop computer to overlap some PLCs in certain applications.  

 Suppliers

Well known PLC brands include Siemens, Allen-Bradley, IDEC, ABB, Mitsubishi, Omron, Honeywell, Schneider Electric, Saia-Burgess Controls, and General Electric.

source:wikipedia.org

“Touch Screen”: A General Introduction

You must have seen touch screens on ATMs, cellphones, and information kiosks. A touch-screen-based system allows easy navigation around a GUI-based environment.



Tap the screen twice to double-click, and drag a finger across the screen to move the cursor. We can do almost everything the mouse does, and do away with the additional device (the mouse), leaving only the screen.

So: they’re all the rage, they’re the buzzword, they’re probably the hottest thing in the technology cosmos right now. Falling prices and improved technology have fueled the use of these devices in smaller consumer products like mobile phones, Tablet PCs, and handheld gaming consoles.

The components

There are several popular touch screen technologies, but all of them have three main components.

·        Touch sensitive surface

·        The controller

·        The software driver


>> The touch-sensitive surface is an extremely durable and flexible glass or polymer touch-response surface, and this panel is placed over the viewable area of the screen. In most sensors there is an electrical signal going across the screen, and a touch on the surface changes the signal in a way that depends on the touch-sensor technology used. This change allows the controller to identify the location of the touch.

>> The controller is a device that acts as the intermediary between the screen and the computer. It converts the electrical signal of the touch event into a digital signal that the computer can understand. The controller can be integrated with the screen or housed externally.

>> The software driver is an interpreter that converts the signals coming from the controller into information that the operating system can understand.

Touch screen sensor technologies

Resistive touch screens

Resistive touch screens have two glass or acrylic layers: one is coated with a conductive material and the other with a resistive material. When the layers are brought into contact, the resistance changes; the change is registered and the location of the touch is calculated (a coordinate sketch appears below).

These are durable and resistant to humidity and liquid spills, but they offer limited clarity and the surface can be easily damaged by sharp objects.
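
As a simplified illustration (an idealized 4-wire resistive panel with made-up ADC readings, not any specific controller’s firmware), the coordinate calculation boils down to scaling two voltage-divider ratios:

```python
ADC_MAX = 1023                    # assumed 10-bit ADC full scale
SCREEN_W, SCREEN_H = 800, 480     # assumed panel resolution in pixels

def touch_position(adc_x, adc_y):
    """Scale the two voltage-divider readings (one per axis) into pixel coordinates."""
    x = adc_x / ADC_MAX * SCREEN_W
    y = adc_y / ADC_MAX * SCREEN_H
    return round(x), round(y)

print(touch_position(512, 256))   # -> (400, 120): near the horizontal centre, upper area
```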

·       Capacitive touch screens

In these devices a glass panel carries a coating of charge-storing material on its surface. When the panel is touched, a small amount of charge is drawn to the point of contact. Circuits located at each corner of the screen measure the differences in charge and send the information to the controller, which calculates the position of the touch (see the sketch below). These screens are used where clarity and precision matter, as in laptops and medical imaging.
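
A simplified sketch (an idealized surface-capacitive panel with made-up corner currents, not a real controller algorithm) of how the touch position follows from the share of current drawn from each corner:

```python
def touch_position(i_ul, i_ur, i_ll, i_lr, width=800, height=480):
    """Corner currents: upper-left, upper-right, lower-left, lower-right."""
    total = i_ul + i_ur + i_ll + i_lr
    x = (i_ur + i_lr) / total * width    # more current from the right corners => touch further right
    y = (i_ul + i_ur) / total * height   # more current from the top corners => touch higher up
    return round(x), round(y)

print(touch_position(1.0, 3.0, 1.0, 3.0))   # -> (600, 240): right of centre, mid-height
```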

·        Acoustic wave touch screens

Touch Screen Acoustic Wave Technology

This newer technology uses ultrasonic waves that pass over the screen. When the panel is touched, the wave is disturbed at the point of contact, and receivers at the edges of the panel register this change.

Since only glass is used with no coating, there is nothing that wears out.

Infrared touch screens

Infrared touch screens are primarily used for large displays, banking machines, and in military applications.

Touch Screen Infrared Technology

Infrared touch screens are based on light-beam interruption technology. Instead of an overlay on the surface, a frame surrounds the display. The frame has light sources, or light emitting diodes (LEDs) on one side and light detectors on the opposite side, creating an optical grid across the screen.

When an object touches the screen, the invisible light beam is interrupted, causing a drop in the signal received by the photo sensors.
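
A toy sketch (an assumed 8×6 beam grid, far coarser than a real frame) of how the interrupted beams translate into a touch coordinate:

```python
blocked_cols = [False] * 8     # state of the 8 vertical (x) beams
blocked_rows = [False] * 6     # state of the 6 horizontal (y) beams
blocked_cols[5] = True         # pretend a finger interrupts the 6th vertical beam
blocked_rows[2] = True         # ... and the 3rd horizontal beam

if any(blocked_cols) and any(blocked_rows):
    x = blocked_cols.index(True)
    y = blocked_rows.index(True)
    print(f"touch at grid cell ({x}, {y})")   # -> touch at grid cell (5, 2)
```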

Optical touch screen

Touch Screen Optical Technology

Optical touch screen technology is ideal for large LCD and plasma displays up to 100″ diagonal.

Optical touch screen technology uses two line scanning cameras located at the corners of the screen. The cameras track the movement of any object close to the surface by detecting the interruption of an infra-red light source. The light is emitted in a plane across the surface of the screen and can be either active (infra-red LED) or passive (special reflective surfaces).

Dispersive Signal Technology

Dispersive Signal Technology (DST) consists of a chemically-strengthened glass substrate with piezos mounted on each corner, mated to a sophisticated, dedicated controller. The DST Touch System determines the touch position by pinpointing the source of “bending waves” created by finger or stylus contact within the glass substrate. This process of interpreting bending waves within the glass substrate helps eliminate traditional performance issues related to on-screen contaminants and surface damage, and provides fast, accurate touch attributes.

Dispersive Signal Technology (DST)

·      Other technologies

The above are the main technologies; there are others as well, like strain gauge and Microsoft’s Surface technology. Through surface computing you can seamlessly synchronize electronic devices that touch the surface.

Advantages and disadvantages

With improvements in touch-screen technology, precise pointing is also possible. Touch screens with a glass surface can resist dirt, grease, moisture, and even household cleaning agents.

Touch screens have some disadvantages: people with larger fingers may mishit keys, and the screens need to be handled carefully. Touch screens are complex items, and unlike a keypad there are many things that can go wrong.

Future prospects

Many large companies like Microsoft and Apple have gotten on the touch-screen bandwagon, multi-touch screens in particular. Apple has patented an “Integrated Sensing Display” in which display elements are integrated with image-sensing elements. If this is put to use, you could have a single device for video conferencing that looks like a monitor, where the monitor itself is the camera!
