Saturday, August 9, 2008

Intel's Barrett: U.S. Woes Not Hurting PC Market

LISBON -- Intel, the world's biggest microchip producer, expects no slowdown in global demand for personal computers despite economic problems in the United States and in other countries, its chairman, Craig Barrett, said on Wednesday.
He also told reporters here in Lisbon, where he was to sign a draft deal with the Portuguese government to make 500,000 cheap portable computers for schools, that the company was upbeat on demand prospects for low-cost computers and broadband wireless systems.
"We gave a relatively upbeat business forecast, saying that despite the economic problems in the United States, our business is so international that we didn't see any slowdown in the PC market," he said. Intel (NASDAQ: INTC) posted a quarterly jump in profits during its most recent earnings report.
Barrett said a number of economies have not been seriously affected by the U.S. slowdown, providing hope that the crisis will have limited implications.
"We are seeing ... that the slowdown in the U.S. hasn't spilled everywhere else. The world's economy is not as robust as it could be, but it's not a disaster."
Apart from broadband wireless and the next generation of low-cost computers, Intel also remains bullish about the introduction of more digital capability in health care.
"There's a huge opportunity to use it not just in the back-office but in remote diagnostics," he added.
Referring to the European Union's recent antitrust charges against Intel, Barrett said price reductions for microprocessors and computers have an "anti-inflationary nature" at a time when prices are rising globally, calling that a testament to strong competition in the sector.
"It looks as if the market is functioning as it should, because every year consumers are getting more for less," he said. "We continue to say that -- please just look at the facts, don't just listen to a competitor complaint."
The response comes amid ongoing criticism over Intel's competitive practices, particularly as they relate to smaller rival AMD, whose market position is at the heart of the European Commission's charges.
Most recently, during AMD's (NYSE: AMD) most recent quarterly earnings call, the company's chairman, Hector Ruiz, said he would work to break "our industry from the grip of an illegal monopoly."
Last year, the Commission accused Intel of giving computer makers rebates to limit their use of AMD's chips or avoid them altogether.
The Commission issued additional charges against Intel earlier this month, saying the U.S. company had paid a retailer to refrain from selling computers with chips made by AMD.
Intel lawyers have previously said that the new charges filed against the company by the European Commission could lead to higher prices for consumers.
At any rate, while Intel is flying high, its rival continues to rack up losses. Earlier this month, AMD posted a loss of $1.189 billion on sales of $1.35 billion.
At the same time, Ruiz also announced that he would step down from the position of CEO.
Ruiz Steps Down as AMD CEO, Meyer Ascends
After seven quarters of bleeding money, Advanced Micro Devices will have a new CEO.
Hector Ruiz has stepped down as chief executive of the troubled microprocessor company, with Dirk Meyer, president and chief operating officer, taking over the reins.
The news was announced on a conference call to discuss AMD's (NASDAQ: AMD) second-quarter financials, which were not pretty. The company reported a loss of $1.189 billion, or $1.96 per share, on sales of $1.35 billion.
"The time is right," Ruiz said today during the company's earnings call. "Barcelona is shipping, the conversion to 45 nanometer is on track," he added, referring to the much-delayed launch of AMD's quad-core Opterons and the company's upcoming smaller chip designs.
He also said the company has made progress on its efforts to streamline its business operations in connection with foundry partners, a plan it calls "Asset Smart".
"This is why the time is right to turn the company over to a new leader, one who has earned the trust of AMD partners and customers worldwide," he said.
Ruiz, 62, joined AMD as president and chief operating officer in January 2000 and became AMD's chief executive officer on April 25, 2002. He has served on AMD's board of directors since 2000 and was appointed chairman of the board of directors in 2004.
Meyer, 46, joined AMD in 1995 as part of the design team for the original AMD Athlon processor. He worked his way up to president and COO in 2006. He holds more than 40 patents as an engineer.
Ruiz will stay on as executive chairman and chairman of the board, working on the Asset Smart strategy and continuing the battle against Intel by "breaking our industry from the grip of an illegal monopoly," as he put it.

Sun, Intel Push Optimized Solaris

SAN FRANCISCO – Eighteen months after Sun Microsystems and Intel made peace and announced plans for Intel-based servers as well as working together on Intel-optimized software, the two companies held a briefing with reporters here Tuesday to update their progress and future direction.
Much of their work has centered on optimizing OpenSolaris, the experimental version of Solaris (Sun's Unix variant) where new features are tested and debugged before being added to the official Solaris product that ships with Sun servers. The deal is already bearing fruit in helping Sun (NASDAQ: JAVA) optimize Java for Solaris.
"The January '07 deal gained us access to Intel's architecture to do things in Solaris we could not do before," said Herb Hinstorff, director of marketing for Sun Solaris. "The first year was all about getting to know each other. This year, we are shipping Intel optimizations in Solaris now and Intel is a contributing member to OpenSolaris."
Dave Stewart, a software engineering manager at Intel (NASDAQ: INTC), added: "It has been tremendous to see the results of the collaborative effort of marrying Solaris with Xeons. We're now working with the Sun xVM team to deliver some virtualization optimizations."
Beyond the CPU
But the work isn't just on the CPU level. Andy Roach, senior director of x64 engineering for Solaris, added: "We're working on more than just CPUs. We're working to deliver full Intel-based solution stacks to Solaris users." He said Intel developers are frequently seen in Sun offices and vice versa.
Sun gives away OpenSolaris and new versions are released every six months, as opposed to the multi-year gap between Solaris releases. After the developer community has had a chance to experiment with technologies on OpenSolaris, such as virtualization, power management and CPU optimization, those technologies will migrate to the enterprise product.
The two companies made Penryn optimizations to OpenSolaris and Hinstorff said they hope to have all of the enabling technologies for Nehalem, Intel's new processor architecture due by the end of the year, in the next release of OpenSolaris.
There will have to be optimizations because Nehalem will represent a major change in architecture over the current Xeon design. The memory controller will be on the CPU, noted Roach, which will mean a major drop in memory latency and improvement in performance. Nehalem will also use a different kind of memory, DDR3.
The two companies are also working on Java performance tuning. Within the first six months of the alliance, Java performance had improved 20 percent, and at JavaOne earlier this year, they announced a 68 percent benchmark improvement. The Java groups at Sun and Intel are working on optimizations across multiple operating systems, with the goal "to have best performing Java on Intel as possible," said Hinstorff.
IDC vice president Jean Bozman said the alliance is definitely starting to benefit Sun. "Sun has seen an improvement in its x86 sales, but there are multiple reasons for it," she told InternetNews.com. "One reason is they are now able to tap into a very large segment of the marketplace where they hadn't been participating before. By optimizing Solaris for Intel, it could only make things better for them."

HP, Intel, Yahoo Team on Cloud Computing Labs

Is there gold in them thar clouds? Yahoo, Intel and HP seem to think so.
The trio of tech titans announced an ambitious research initiative today focused on studying the software, hardware and datacenter management issues surrounding cloud computing. The group's new Cloud Computing Test Bed is aimed at creating a large, globally distributed testing environment that they hope will encourage unprecedented levels of research.
Companies from Amazon to Google, IBM and Salesforce.com already offer some cloud computing services, but Prith Banerjee, senior vice president of research at HP and director of HP Labs, said a larger effort is needed.
"It requires an entirely new approach to the way we manage and deploy cloud computing," he said in a conference call with reporters.
Banerjee also said the participation of HP Labs fit with the group's renewed commitment to focus more on projects that have a clear commercial payoff.
Other partners for the test bed include Infocomm Development Authority of Singapore (IDA), the University of Illinois at Urbana-Champaign, and Germany's Karlsruhe Institute of Technology (KIT). The partnership with the University of Illinois also includes a grant from the National Science Foundation.
The test bed will initially consist of six locations at IDA, the University of Illinois, the Steinbuch Centre for Computing at KIT, HP Labs, Intel Research and Yahoo.
The companies said each location will host a cloud computing infrastructure, with HP and Intel providing the hardware and processors. Each center will have 1,000 to 4,000 processor cores.
All six locations are expected to become fully operational later this year, when they'll also become available to researchers through a worldwide selection process.
"We really need to think of this beyond the physical hardware layer," said Prabhakar Raghavan, head of Yahoo Research. "It's about what exciting applications you can build once you take the cloud for granted. What if a utility was always available at any location or scale we want?"
Gartner analyst David Mitchell Smith said the research effort could potentially pay off by providing companies and organizations with ways to use cloud services to offload the huge cost of buying and maintaining datacenters.
"People are starting to imagine the cloud on that scale -- it's not pie in the sky anymore," Smith told InternetNews.com.
Intel (NASDAQ: INTC) Research Director Andrew Chien said the scale of the project will help to expose what works and what needs more research. "It's one thing to do things at a test tube level, it's another to operate at a much larger scale with network effects and contention," he said.
Cloud computing research on a large scale is far from new. Last year, Google (NASDAQ: GOOG) and IBM (NYSE: IBM) also announced a joint project in conjunction with several universities. When asked why Intel and its partners didn't simply join that effort, Chien said he thought his group's effort is complementary to, and different from, the Google/IBM project.
"What we're trying to do is support research in a variety of levels and novel hardware features Intel has been able to add in silicon," he said. "We'll allow people to run a fairly low level of customized software."
Gartner's Smith said the reason Yahoo, Intel and HP are going their own way is simple: "They're competitors."
Additionally, he doesn't think a broader collaboration of competitors is likely in the near future.
"I wouldn't expect one unifying 'cloud' vision anytime soon," he said. "You'll see different definitions by different vendors."

Intel Defies Economy, Seasonal Weakness

Intel on Tuesday announced a 25 percent jump in profits over the second quarter of 2007 thanks to its aggressive cost cutting and continued strength of its products, despite U.S. economic problems and seasonal softness.
Normally, the second quarter is the weakest time of year in the hardware business, but Intel sales rose 9 percent over 2Q07 to $9.5 billion for its fourth record quarter of revenue in a row. The chip giant's aggressive program of cost cutting has paid off to the tune of a 67 percent improvement in operating income and a net profit of $1.6 billion, a 25 percent improvement over last year.
That translates to 28 cents per share, three cents better than analysts were expecting. Intel shares rose 21 cents in after-hours trading, after rising 24 cents to $20.71 during the regular session.
During a conference call with financial analysts, CEO Paul Otellini was very upbeat on the quarter's numbers. "Demand continues to be strong, with revenue and unit shipments at the high end of the norm when taking into account the divestiture of the NOR business," he said.
Intel is in the process of finalizing the spin-off of its NOR flash memory business with STMicroelectronics.
The 45nm manufacturing process continues to ramp up, with the company expected to ship 100 million chips this year, Otellini said. They are getting better yields than at 65nm and expect 45nm to become the dominant manufacturing process this quarter.
Another segment growing faster than Intel expected: notebooks. Otellini said that notebook processor sales surpassed desktops in Q2. "That was sooner than we expected," he said.
The other surprise is Atom, its embedded chip. Atom sales are exceeding expectations, and unit shipments will grow sharply in the second half. The company is increasing production of Atom every 40 days as demand ramps up in netbooks, embedded devices and consumer electronics.
Chief Financial Officer Stacy Smith said that microprocessor revenue was up 14 percent, with mobility accounting for one-third of total quarterly revenue and up 15 percent from 2Q07. All geographies experienced year-to-year growth, including the U.S., and were a little better than average on a seasonal basis.
"We are aware of global issues that dominate the market these days," he said. "We saw order patterns [grow] as anticipated. Inventories are healthy, our global footprint is benefiting from demand."
For the third quarter, Intel projects revenue between $10 billion and $10.6 billion, which would be 2 percent growth year over year at the midpoint of that range. Gross margin will be around 58 percent.

Intel Shows Off New Centrino Notebook Chips

TAIPEI -- Intel, the world's top PC chipmaker, on Tuesday launched the next generation of its Centrino wireless chip, which it hopes will provide a new revenue stream amid a broader push into mobile technologies.
The launch of the Centrino 2 chip, previously codenamed "Montevina," came after a delay of several months and was decidedly lower key than the launch of the first Centrino chips in 2003.
The next-generation chipset combines Wi-Fi capability, which has a maximum range of only about 100 meters, with WiMAX, the newer wireless technology that allows for high-speed data transmission over much larger distances, and which can be used to blanket entire cities.
IDC analyst Bryan Ma said the new chips represent incremental development for the industry, compared with the first Centrino that marked Intel's (NASDAQ: INTC) entry to the wireless space.
"My big question is whether this is revolutionary or evolutionary; I suspect it may be more of the latter," Ma said. "Even if it's just evolutionary, however, it is still a good fuel to help the industry along."
The chips are mainly intended for notebook computers, as the PC industry shifts toward lighter, more mobile devices and new wireless networks are built out.
Taiwan is in the process of constructing six WiMAX wireless networks, and is also the world's top contract PC producer, with sector leaders Quanta and Compal Electronics manufacturing for Hewlett-Packard (NYSE: HPQ), Dell (NASDAQ: DELL) and Sony (NYSE: SNE), among others.
"Notebook computers will be the main industry driver in the future, and notebook sales already outnumber desktop sales in many countries," said Stanley Huang, director of advanced technical sales and services for Intel Asia Pacific, at a launch event in Taipei.
"Because this chip has new capabilities, we hope it will change the way people think of mobile computing," he said.
Others at the launch included HP, the world's top PC maker, and Acer and Lenovo, the third- and fourth-place PC vendors.
All plan to use the chips in their models, with various players designing some 250 different notebook models with the chip.
The world's largest chipmaker is hoping to capitalize on the growing popularity of notebook PCs, which are rapidly taking over from older desktop models.
Data tracking firm IDC expects the notebook PC segment to grow 35 percent this year to 145 million units shipped, while desktops should grow much more slowly, by 2 percent, to 157 million. At current growth rates, IDC estimates annual notebook shipments will surpass desktops next year.
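The arithmetic behind that IDC projection is easy to sanity-check. A quick sketch in Python, assuming (as the estimate does) that this year's growth rates hold for another year:

```python
# IDC's 2008 shipment estimates from the article, with this year's growth
# rates assumed (hypothetically) to hold through 2009.
notebooks_2008 = 145_000_000  # units, growing 35 percent this year
desktops_2008 = 157_000_000   # units, growing 2 percent this year

notebooks_2009 = notebooks_2008 * 1.35  # about 196 million
desktops_2009 = desktops_2008 * 1.02    # about 160 million

# At those rates, notebook shipments overtake desktops in 2009,
# matching IDC's projection.
assert notebooks_2009 > desktops_2009
```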
The Centrino 2 launch is part of a broader Intel strategy to develop a wider suite of wireless products to use in non-PC devices, most notably mobile phones, as data transmission speeds improve with new mobile technologies.
Such technologies allow for a much wider range of applications, such as streaming video and video downloads, that would have been impossible using older technology.
Paul Otellini, Intel's CEO, said in a speech last year that his company is seeking to spread its technology from the high-performance computing market to smaller products such as TV set-top boxes and handheld Internet-enabled devices.

Intel Introduces First System on a Chip

As promised, Intel has jumped into the System-on-Chip (SoC) market with a series of announcements based around both the Atom processor and a new family built on the Pentium M chip.
The company has 15 SoC projects in its pipeline, aimed at large and small form factors alike, from handheld devices and mobile Internet devices (MIDs) to cars to servers, all of them considered new, growth markets, according to Gadi Singer, general manager of Intel's SoC enabling group.
Intel (NASDAQ: INTC) sees its greatest opportunities in the mobile Internet space and in emerging markets. Singer predicted the Internet will have 1.2 billion users by 2012 and that people will expect Internet connectivity wherever they go. Mature markets want their Internet, too, having grown up with constant access.
"New devices need to be powerful, responsive, but in a small form factor," he told a briefing of journalists. "This puts a lot of pressure on the silicon to support this."
The company's first SoC product is the EP80579, a four-on-one chip based on the Pentium M processor design with memory controller, I/O controller and a set of integrated application-specific accelerators on a single chip. Doug Davis, vice president of the Digital Enterprise Group and general manager of the Embedded and Communications group, said Intel used the Pentium M because that's what it had available at the time development began and it met its needs.
According to Davis, the chip will be 45 percent smaller than if the four chips were installed separately on a motherboard and consume 34 percent less power as a whole rather than as four separate pieces.
The EP80579 will consume 11 to 21 watts, depending on clock speed, and run between 600MHz and 1.2GHz. To satisfy industrial requirements, the company has announced a seven-year support life cycle.
The EP80579 is aimed squarely at the embedded market. The next generation will be based on Atom instead of the Pentium M, hardly a problem since Atom is also based on the Intel Architecture (IA) instruction set. The next generation of the MID processor line will be Lincroft, due next year, featuring a tenfold reduction in power consumption over the current line of chips.
For consumer electronics (CE), the company plans to release a chip code-named Canmore later this year, followed by Sodaville next year. Intel did not go into detail beyond saying that the chips would be tuned for consumer electronics devices, such as bringing the Internet to the television.
IDC analyst Shane Rau said this is a big change for Intel. "These are very highly integrated processors," he told InternetNews.com. "Intel is known for stand-alone processors. What's different about these is there's a system on a chip. So Intel is prepared to introduce pieces of silicon that are app-specific."
This reflects Intel's desire to look for every possible way to expand, and the way it expands is by finding new markets to enter. "It's the PC market looking to redeploy its technologies into new markets," he said.

Intel Reveals First Details on Its GPU Entry

Intel has taken the wraps off its entry into the graphics processor market, offering up the first technical hints of "Larrabee," its attempt to take on nVidia and ATI/AMD in the highly competitive graphics processor space.
In a briefing with journalists ahead of its presentation at the Siggraph show later this month in Los Angeles, three engineers on the team went into great technical detail on the structure of the chip, but declined to give much product specification.
What they would say is that initially, Larrabee will be available as an add-in card, just like nVidia (NASDAQ: NVDA) and ATI (NYSE: AMD) cards are now. Other potential uses or form factors for the cards were not discussed. The product won't appear on the market until late 2009, if not 2010.
Larrabee is built on the old Pentium technology, but heavily modified and modernized for graphics processing. Intel (NASDAQ: INTC) would not say how many cores would constitute a Larrabee processor, beyond the nebulous promise of "dozens." The cores all communicate through a wide "ring" bus that allows for fast inter-core communication and sharing of data, as well as sharing cache data. The L2 cache is partitioned among the cores, allowing for data replication and sharing.
Each Larrabee core is a complete x86 core capable of context switching and preemptive multitasking, with support for virtual memory and page swapping. The main difference between Larrabee and other GPUs is that Larrabee will be much more flexible in the steps through which graphical data is processed.
Existing GPU architectures require data to be passed through a battery of fixed-function processors, from a vertex shader to a pixel shader to a rasterizer, even if a particular processing job doesn't require all of them. Intel, by contrast, believes there is no such thing as a "typical" workload.
"The problem with designing a GPU is how much performance to put into the different segments to balance out variations in the load," said Larry Seiler, senior principal engineer in the visual computing group at Intel.
So Intel's solution is to not have fixed function stages in the pipeline. "You are entirely in control of how processing happens," said Seiler. "You can change scheduling, how each stage is handled, you can modify it to handle the characteristics of your workload, or change the rasterizer for your workload."
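Seiler's description of a pipeline with no fixed-function stages can be sketched in a few lines of Python. This is purely an illustrative model, not Intel's actual software; the stage names and functions are invented for the example:

```python
# Hypothetical model of a graphics pipeline as a list of interchangeable
# software stages, illustrating the difference between a fixed-function
# pipeline and a fully programmable one like Larrabee's.

def vertex_shade(data):
    return {**data, "vertices_shaded": True}

def rasterize(data):
    return {**data, "rasterized": True}

def pixel_shade(data):
    return {**data, "pixels_shaded": True}

def run_pipeline(stages, data):
    """Run frame data through whatever stages the developer configured."""
    for stage in stages:
        data = stage(data)
    return data

# A conventional GPU forces every job through every fixed stage:
fixed_pipeline = [vertex_shade, rasterize, pixel_shade]

# In a software pipeline, a workload that needs no pixel shading can omit
# that stage, or substitute a custom rasterizer tuned to its workload:
def custom_rasterize(data):
    return {**data, "rasterized": True, "custom": True}

custom_pipeline = [vertex_shade, custom_rasterize]
```

The point of the sketch is that scheduling and stage selection live in software, so they can be changed per workload rather than being baked into silicon.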
However, Jon Peddie, president of Jon Peddie Research, said Intel's solution is not necessarily better or worse. "There are fixed functions and logical steps one has to go through in a GPU, but those things are in there for a very good reason. Those are the most efficient ways to do graphics programming," he told InternetNews.com.
What Intel is doing is adapting its strategy, the x86 architecture, to graphics, Peddie added. "This is really a multi core CPU. What makes it different from the x86 we are using in our computers is this ring communication for interprocessor communications. That is one of the main differentiators between Larrabee and Nehalem."
That said, Peddie thinks the "ring" for inter-core communication is a big revolution. "I think it's fantastic that Intel has done this, because this is the first innovation in computer graphics architecture since the GPU was introduced almost ten years ago. So they get a lot of credit from me for being brave enough to do it," he said.
The ring "gives you a really fat communication path for every processor to talk to every other processor. That's something they have that neither ATI nor nVidia have," said Peddie. "nVidia and ATI have an order of magnitude more processors, but are built in groups or gangs and communicate from group to group. So processor 004 can't talk directly to processor 794."
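Peddie's contrast between a ring and grouped interconnects can be made concrete with a toy hop-count model. This is an illustration only; the core count and topology details here are assumptions, since Intel promised nothing more specific than "dozens" of cores:

```python
# Illustrative model (not Intel's published design): on a bidirectional
# ring interconnect, a message from any core reaches any other core by
# travelling the shorter way around, so every core can address every
# other core directly.

def ring_hops(src, dst, n_cores):
    """Hops between two cores on a bidirectional ring of n_cores."""
    forward = (dst - src) % n_cores
    backward = (src - dst) % n_cores
    return min(forward, backward)

# With, say, 32 cores, the worst case is halfway around the ring:
n = 32
worst = max(ring_hops(0, d, n) for d in range(n))  # 16 hops

# Contrast with the grouped designs Peddie describes, where a message
# between cores in different groups must route through per-group links
# rather than addressing the remote core directly.
```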
Seiler said the flexibility of Larrabee is not limited to the hardware but extends to the software as well. "If a developer finds something in the API that limits them, they can create their own," he said. "We want to ensure developers the freedom to run on Larrabee as they need."
That also means possible forking as developers improvise their own fixes, the same problem that made Unix so incompatible after many years of proprietary fixes. Intel is aware of that. "We want to give them freedom, but we are wary of the potential for splintering. So it's a balancing act," said Seiler.
Intel has been heavily romancing major computer graphics experts at universities all over the world, along with all of the major game developers. The paper being presented at Siggraph lists Stanford engineers as contributors alongside many Intel engineers, as well as Mike Abrash, one of the best-known gaming graphics programmers.
So Intel is mounting a full-court press with Larrabee, a big change from its less-than-stellar integrated graphics products. "Don't judge Larrabee by Intel's current graphics products," said Peddie. "[Intel CEO] Paul Otellini has taken the handcuffs off the guys at Intel who know how to do graphics. Not only has he taken the cuffs off, he's given them the checkbook to get some staff and IP behind it. As a result, Intel is going to do it right."

Apple's Suit Aims to Eclipse PsyStar

Apple has confirmed it's suing Doral, Florida-based startup PsyStar for selling computers using unauthorized copies of Apple's Leopard operating system.
"We take it very serious when we believe people have stolen our intellectual property," Apple spokesperson Susan Lundgren told InternetNews.com. Beyond that, Lundgren said Apple (NASDAQ: AAPL) had no other comment to make on the suit other than to confirm reports that Apple had taken the legal action against PsyStar last week.
Calls to PsyStar yesterday were not returned. Apple's suit was filed in U.S. District Court for the Northern District of California. Among Apple's complaints is that PsyStar's use of Apple's proprietary software and intellectual property has "harmed consumers by selling to them a poor product that is advertised and promoted in a manner that falsely and unfairly implies an affiliation with Apple."
Apple co-founder and CEO Steve Jobs has never been a fan of licensing. When he returned to the company as CEO in 1997, one of his first acts was to shut down a licensing program that allowed other companies to build Mac-compatible computer systems powered by Apple's operating system.
Back in May, PsyStar started selling a computer, originally called the OpenMac and since renamed the Open Computer, that included a modified version of Apple's operating system. As of today, the PsyStar Web site is still operating and taking orders. It touts the Open Computer as "The Smart Alternative to an Apple."
An FAQ on the site further states: "The idea of the Open Computers is not to pirate the Apple operating system but to allow the Apple operating system to be run on hardware of the user's choosing."
PsyStar's cheapest product is the Open Computer, a tower system with 2GB of memory, sans monitor, keyboard or mouse, that comes with OS X Leopard pre-installed for $554.99.
The accompanying product blurb says: "Why spend $1999 to get the least expensive Apple computer with a decent video card when you can pay less than a fourth of that for an equivalent sleek and small form-factor desktop with the same hardware."
Apple's cheapest Mac is the Mac mini at $599, with less memory and graphics capability than the Open Computer.
PsyStar also sells downloaded updates for Leopard and servers using the operating system. Apple has asked the court to stop PsyStar from using Leopard and seeks unspecified damages, according to a report by the Bloomberg News service and numerous other news outlets.

Apple: Out of Touch With Server Room Needs

Here's a great idea to put to your CIO: Why not run the company using a server operating system made by Mattel? It's the company behind Barbie and Hot Wheels (not to mention Tumblin' Monkeys), so it certainly knows a thing or two about toys. Maybe its designers have enough time to put together an enterprise OS.
Yeah, right. The idea is plain ridiculous, but is it any more ridiculous than using Apple's OS X Server or letting end users work on Macs in the enterprise?
Because the truth is, Apple is not really a computer company. It makes toys. It used to be a computer company called Apple Computer, but it dropped the "Computer" bit from its name in January 2007 as a tacit admission that it was now a consumer gadget maker, not to mention an online music retailer. Following the introduction of the iPhone and iPod Touch, two very pretty "boys' toys," the company's latest caper is the launch of its App Store.
The top-selling applications as I write are Band, Crash Bandicoot and Super Monkey Ball, which sounds uncomfortably similar — in name at least — to the aforementioned and very wonderful Tumblin' Monkeys.
Perhaps I am being unfair. After all, Microsoft makes toys too, and plenty of enterprises run their businesses using its server OSes. Just under 40 percent of all server spending was on Windows-based servers in the first quarter of 2008, according to IDC's Worldwide Quarterly Server Tracker. Yet Microsoft also makes the Zune, for example, as well as the legendary Microsoft Barney.
So why shouldn't enterprises take Apple seriously? Here's the problem: It can't walk and chew gum at the same time.
Microsoft is huge, and it is quite capable of doing more than one thing at a time. During the past two years, it worked on Vista, Windows Server 2008, the Hyper-V virtualization system and the Zune — all at the very same time.
The same cannot be said for Apple. It can certainly make great toys like the iPhone and iPod Touch, and there's no doubt it can create OSes. But, as it revealed last year, it can't do both at the same time:
"... iPhone contains the most sophisticated software ever shipped on a mobile device, and finishing it on time has not come without a price — we had to borrow some key software engineering and QA resources from our Mac OS X team," Apple announced. "As a result we will not be able to release Leopard at our Worldwide Developers Conference in early June as planned."
That's scary stuff and not what you want to hear from an OS developer. The iPhone software development effort wasn't a one-off, either: The company has clearly been putting plenty of resources into developing version 2.0 of its iPhone (and iPod Touch) software during the past few months. Firmware for its consumer toys, not software for its computers, is the priority at the moment.
It's ironic, then, that many commentators have been suggesting the new iPhone 2.0 software (which adds features like support for Microsoft Exchange ActiveSync to make the phone more attractive to businesses) will raise Apple's profile in the business market and thus lead to a gradual increase in the use of Macs in the enterprise. Also ironic is the formation of a new industry group, the Enterprise Desktop Alliance (EDA), made up of Atempo, Centrify, Group Logic, LANrev and Parallels. This consortium of software developers is dedicated, according to the group's Web site, to "making it easy to deploy, integrate and manage Macs in a Windows environment."
Running Macs in the enterprise doesn't seem like a good idea if Apple hasn't got enough engineers to provide the kind of resources an enterprise desktop OS inevitably needs. For example, if enough corporate bigwigs insist on bringing their shiny new MacBook Airs onto the LAN, you can be sure malware writers will start to target them. How fast will Apple be able to respond with patches if it's too busy selling Super Monkey Ball?
Granted, Microsoft isn't always the fastest to respond to newly discovered exploits, but there's no doubt it has the manpower to put to the task when it really wants to. Likewise, there's certainly no shortage of Linux and Unix patch-writing experts willing to devote their time to producing security fixes for their OSes.
So where does that leave Apple's server OS? In the nine years since its launch, it's gone precisely nowhere, and with Unix server spending declining in the first part of the year, according to IDC, it doesn't exactly look like sales are going to explode any time soon. Where will Apple be devoting its attention over the next year or two then: Developing its server OS or making more toys for the boys (and girls)? That's a tough one. Not.
Lest you think I am just another mindless Apple basher, I'll proudly admit to having an iPod Touch, a black Classic, white fourth- and third-gen models, a silver Nano, and three Shuffles (one lost) in a variety of colors. But would I put an Apple Server in my business? Not a chance.
In addition to writing for ServerWatch, where this column first appeared, Paul Rubens is an IT consultant and journalist based in Marlow on Thames, England. He has been programming, tinkering and generally sitting in front of computer screens since his first encounter with a DEC PDP-11 in 1979.

Microsoft Serves Up SQL 2008

Microsoft today announced it's releasing to manufacturing SQL Server 2008, its enterprise database and business intelligence platform.
Originally planned for earlier this year to coincide with Windows Server 2008, SQL Server was delayed to ensure the code was solid.
The release comes three years after SQL Server 2005, an improvement on the five-year gap between it and SQL Server 2000. Microsoft (NASDAQ: MSFT) got an earful from customers and promised faster release cycles.
"Customers clearly told us [the gap between SQL Server 2000 and SQL Server 2005] was too long," said Ted Kummert, corporate vice president of Microsoft's data platform and storage division. "We heard this and committed from there forward to a 24- to 36-month release cycle and said that the next release of SQL Server would be available 24 to 36 months from the release of 2005."
Throughout that time, Microsoft offered a community technology preview (CTP), the fancy word for a beta, for customers to download and test. As new features were added, the CTP was updated. This helped not only gather feedback but also seed the market so applications would be ready on launch day.
Microsoft said that more than 75 large-scale applications are already in production, and more than 1,350 applications are being developed by nearly 1,000 independent software vendors (ISVs) on SQL Server 2008. "Our overall objective was to get a lot of customer feedback and get a lot of apps into production," Kummert said.
Microsoft highlighted a number of big changes: a streamlined upgrade path, integration with services by Oracle and SAP, improved Office 2007 performance, transparent data encryption and a system resource governor.
SQL Server is available in seven different editions, from the free Express version that runs on a desktop computer to the Enterprise edition for mission-critical computing. Pricing remains the same for 2008 editions as it was for the 2005 editions.
One of those editions is new. SQL Server 2008 Web is a database designed specifically for highly available Web applications and hosting environments, according to Microsoft. Hosting partners wanted better features, scalability and pricing for a Web version of SQL Server, so Microsoft created this edition to fit those needs.
Existing customers of SQL Server 2005 will find the pricing most agreeable: free, if they have a Software Assurance support license (and who doesn't?). It is available for a free download from Microsoft's TechNet site. An evaluation copy is planned for release on Thursday.
Chris Aliegro, lead analyst with Directions on Microsoft, said this release of SQL Server isn't quite as monumental a jump as the prior release, but remains important, as SQL Server has become a big business at the company.
"This has clearly become a rock star product for Microsoft," he told InternetNews.com. "It's gone from being interesting to strategic and a billion dollar-plus product, and it's used so widely by other products at Microsoft."
The Web version of the product seems like something aimed at MySQL, which is very popular with Web developers. Aliegro would not say yes or no on that, but added he wouldn't be surprised if that was where Microsoft was aiming.
"Clearly there's a great market opportunity for them to do something like that," he said. "SQL Server has been historically used behind the corporate firewall, not to support Web sites," Aliegro explained. "If there's an opportunity for them to move a product into a lucrative market, they are going to do it, and database-backing Web sites is an opportunity for them."

Open source blades?

Rackable also announced it's joining Blade.org, the industry consortium created in 2006 by IBM and Intel (NASDAQ: INTC) that's trying to create open standards for blade servers. Until now, the blades from Hewlett-Packard, Dell and Sun have used proprietary designs, meaning you can't install a Sun blade in an HP chassis, for example.
IBM has been an exception to that rule, opening its blades up and making the specs public. The first company to offer a third-party product is Themis, which offers a Sun UltraSPARC-based blade that works in a BladeCenter chassis.
"Openness has been a key part of our strategy for blades from day one," said Tim Dougherty, director of BladeCenter strategy at IBM. "We felt it's the right thing to do to expand the blade market."

IBM, Rackable Team Up on Blades

IBM and Rackable Systems today announced a joint sales agreement whereby Rackable Systems will offer IBM's BladeCenter systems for installation inside of Rackable's ICE Cube modular datacenters.
The ICE Cube is a standard-size shipping container with all of the racks for an ultradense datacenter installed inside. Customers can fill it with Rackable's line of 1U to 9U racks or they can add a BladeCenter chassis to use IBM blades.
The deal involves the use of IBM BladeCenter T or HT systems, which are NEBS-3/ETSI-compliant, meaning they’re certified for use in telecommunications environments and carrier facilities.
Pund-IT Principal Analyst Charles King said it's a win for both companies. "It gives Rackable a way to jump into the blade solution space without having to spend any money developing it themselves," he said, adding that developing the blades is an expensive process. "And it allows IBM to sell BladeCenter as an OEM product -- not that Rackable will put its logo on it."
Tony Carrozza, senior vice president of sales and marketing for Rackable, said adding the BladeCenter products to its modular datacenter "puts us in a position to offer customers a broader set of solutions, from cloud computing to enterprise apps, which is what the BladeCenter allows us to do going forward."
Blades from IBM (NYSE: IBM) are inherently different from a Rackable (NASDAQ: RACK) rack-mounted system in that IBM blades are for more general-purpose computing, whereas Rackable systems are meant for ultradense computing projects.
"We don't have what you would consider to be a true blade product," Carrozza told InternetNews.com. "The BladeCenter is a true blade product with a self-contained fabric, management software, redundancy and resiliency in it," he explained, adding that besides rack servers, Rackable typically offers scale-out servers. "It's blade-like but doesn't have its own switching fabric built into the chassis," he said.
Plus, Rackable's systems are based on Intel and AMD chips, whereas IBM blades also offer the option of IBM's POWER5 and POWER6 processors.
The ICE Cube is available in 20- or 40-foot container sizes and can hold up to 1,344 dual-socket blades with quad-core Intel Xeons or 672 quad socket, dual-core AMD Opteron blades.
The BladeCenter chassis slots right into a Rackable ICE Cube, so long as you remember to remove the wheels that are on the chassis. It then uses the ICE Cube power and cooling system for operation and management.
IBM BladeCenter T and HT are available immediately via Rackable Systems and its channel partners.

IBM Follows Through on Data Protection for SMBs

Just three months after acquiring continuous data protection (CDP) startup FilesX, IBM has rebranded the technology as IBM Tivoli Storage Manager (TSM) FastBack and today began pushing its own version to market.
CDP, also known as continuous backup, has been growing in importance as a way for companies to better ensure their files are safeguarded. The process automates file backups each time a change is made, enabling offices to not only restore lost files, but also to reconstruct files as they appeared at any point in the past.
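The mechanism is simple enough to sketch in a few lines of Python. This toy VersionedStore is purely illustrative (the names and structure are mine, not FastBack's design): it records a timestamped copy on every change, so a file can be reconstructed as it existed at any past moment.

```python
import time
from bisect import bisect_right

class VersionedStore:
    """Toy continuous-data-protection store: every write keeps a
    timestamped version, so files can be restored as of any point."""

    def __init__(self):
        self._history = {}  # path -> list of (timestamp, contents)

    def write(self, path, contents, ts=None):
        # Record every change instead of overwriting the last backup.
        ts = time.time() if ts is None else ts
        self._history.setdefault(path, []).append((ts, contents))

    def restore(self, path, as_of):
        # Return the newest version written at or before `as_of`.
        versions = self._history.get(path, [])
        idx = bisect_right([t for t, _ in versions], as_of)
        if idx == 0:
            raise KeyError(f"no version of {path} existed at {as_of}")
        return versions[idx - 1][1]

store = VersionedStore()
store.write("report.doc", "draft 1", ts=100)
store.write("report.doc", "draft 2", ts=200)
print(store.restore("report.doc", as_of=150))  # -> draft 1
```

A real CDP product works at the block level and deduplicates aggressively; the toy above stores full copies only to keep the idea visible.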
In particular, such offerings are lower-cost tools aimed at helping small-to-midsized businesses (SMBs) and remote locations -- which typically lack dedicated IT staff -- deal with mounting storage concerns.
"There is more data being distributed, and companies want consistent data protection strategies," said John Connor, product manager for TSM FastBack.
Big Blue said the ease of managing FastBack's disk-based, block-level storage approach makes it suitable for small and remote offices that have little tech support.
"This eliminates issues with backup windows and provides near-instant recovery," Connor said.
FastBack, built on the FilesX technology IBM acquired earlier this year, also represents a second run at this market, Lauren Whitehouse, an analyst with Enterprise Strategy Group, told InternetNews.com.
IBM initially had attempted to tweak its TSM top-end product to create a data protection tool tailored for small to midsized firms, called TSM Express. But it wasn't a perfect approach, Whitehouse said.
"They scaled down the comprehensive offering but the TSM Express really didn't meet the need as it was still hefty to manage," she added. Instead, the new FastBack offering "is the next generation of that approach."
Still, for Big Blue, being able to swiftly integrate the product and assimilate it into its sales channel shows its proficiency at technology mergers, Whitehouse said.
"They have the formula for taking in technology they acquire and getting it rolled out in an impressive time frame," she added.
The new offering serves to build on IBM's storage flagship, the venerable Tivoli Storage Manager. IBM also said the new FastBack offering, which has been melded into TSM, complements its CDP software for laptops and desktops, IBM Tivoli Continuous Data Protection for Files.
Along with FastBack, IBM also introduced FastBack for Microsoft Exchange and FastBack for Bare Machine Recovery, designed for system and server migrations. Together, the bundled components are sold as TSM FastBack Center.
While the suite is currently offered only for the Windows platform, IBM said it plans to develop additional versions.

IBM Thinking Green for N.C. Datacenter

IBM today announced it will spend $360 million to build an advanced datacenter at its Research Triangle Park facility near Raleigh, N.C. This will be the first datacenter built according to the computer giant's New Enterprise Datacenter design principles.
The New Enterprise Datacenter platform is a fusion of Google's Web-centric cloud approach and the MySpace approach, with an emphasis on data-intensive parallel programming.
The datacenter will be one part of a hub for IBM's (NYSE: IBM) computing infrastructure in the cloud that clients will be able to access anytime from anywhere. The other part of the hub is IBM's datacenter in Tokyo, which is also being unveiled today.
Data-intensive parallel computing "turned out to be a very key element of cloud computing applications we've seen deployed by Facebook, Yahoo (NASDAQ: YHOO), Google (NASDAQ: GOOG) and some folks in the telecommunications industry we've had discussions with," Dennis Quan, director of development, autonomic computing, with the IBM software group, told InternetNews.com.

The technologies for both datacenters were shaped by work done through IBM's partnership with Google. The two teamed up in October to create three datacenters for academic computing using data-intensive parallel programming.
IBM's work with Google "gave us a lot of knowledge and research to figure out the best way to bring out the cloud computing platform," Quan explained. "The manifestation of that is our Unified Datacenter Architecture."
According to Quan, this architecture consists of IBM's systems technologies -- virtualized networks, storage, compute resources using Xen and VMware on x86 boxes and native virtualization capabilities on power systems and the mainframe.
It also includes using Tivoli software "to drive provisioning and monitoring of systems in the datacenter, and advanced capabilities like doing chargeback and ensuring high levels of availability and storage management," Quan added.
Befriending the environment
The Raleigh, N.C., datacenter will leverage green computing principles. It will use many of the technologies from IBM's Project Big Green initiative to slash energy consumption, and use the latest water-cooled and air-cooled equipment.
IBM will make heavy use of virtualization technology in the datacenter. The company plans to adopt a modular construction technology to minimize cost and environmental impact.
After the first 60,000 square feet are built, additional space will be added in equal-size modules on demand. The center will be able to support 2.5 to three times as many clients as a traditional datacenter of comparable size, IBM said.
Work on the Raleigh, N.C., datacenter will begin soon, and it's expected to be completed by next year. It will be "the biggest of the cloud centers we've launched," Quan said.
Since March, IBM has launched cloud computing centers in Dublin, Ireland; Beijing, China; and Johannesburg, South Africa. The Tokyo and Raleigh, N.C., datacenters will be the vendor's eighth and ninth datacenters, respectively.
Over time, Quan expects enterprises to increasingly leverage cloud computing at IBM's datacenters. "We're going to see a lot of apps come about based on SOA that will allow applications on various clouds to talk to each other and be very functional and be managed in a centralized fashion," he said. SOA is service-oriented architecture.
Charles King, principal analyst at Pund-IT, said IBM is making "a very large investment in what's still a very interesting, but from a commercial standpoint, an emerging technology" with its move into cloud computing. However, the datacenters "point to IBM's strength in datacenter design and hardware strength."
King contrasted IBM's moves to the joint initiative announced recently by Hewlett-Packard (NYSE: HPQ), Intel (NASDAQ: INTC) and Yahoo to set up cloud computing labs. "They're working on research testbeds for cloud computing, whereas IBM is moving forward with what will be commercial datacenters."


Other cloud computing initiatives
HP has another cloud computing initiative it launched in May. This will help enterprises build their own minicloud datacenters, complete with products and services for building out the center.
Cloud computing is fast becoming a growth area, and companies are flocking to this sector thick and fast.
In February, storage giant EMC bought Seattle-based startup Pi, using it as the nexus of its newly set-up Cloud Infrastructure and Services Division. Pi founder Paul Maritz, a 14-year veteran of Microsoft (NASDAQ: MSFT), became head of that division.
Maritz is now CEO of virtualization giant VMware, an EMC (NYSE: EMC) company, and sits on its board of directors. He took the post after VMware co-founder and CEO Diane Greene left abruptly.
In June, Q-layer announced the concept of a virtual private datacenter. This would let companies support on-demand cloud computing through virtual datacenters, according to a report by Enterprise Strategy Group analyst Mark Bowker. It would also let them build their own virtual environments, using a credit-based chargeback system to track resource allocation and utilization.
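Q-layer's actual implementation isn't public, but the idea of a credit-based chargeback system can be sketched roughly as follows (the class, rates and department names are invented for illustration): each tenant holds a credit balance that is debited per unit of resource consumed, leaving a running record of allocation and utilization.

```python
class ChargebackMeter:
    """Toy credit-based chargeback: each department starts with a
    credit balance and is debited per unit of resource consumed."""

    RATES = {"cpu_hours": 2.0, "gb_storage": 0.5}  # credits per unit (illustrative)

    def __init__(self, credits):
        self.balance = dict(credits)  # department -> remaining credits
        self.usage = []               # audit trail of every allocation

    def record(self, dept, resource, amount):
        cost = self.RATES[resource] * amount
        if cost > self.balance.get(dept, 0.0):
            raise RuntimeError(f"{dept} has insufficient credits")
        self.balance[dept] -= cost
        self.usage.append((dept, resource, amount, cost))

meter = ChargebackMeter({"marketing": 100.0})
meter.record("marketing", "cpu_hours", 10)   # debits 20 credits
meter.record("marketing", "gb_storage", 40)  # debits 20 credits
print(meter.balance["marketing"])  # -> 60.0
```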
Also in June, Red Hat (NYSE: RHT) made its JBoss Enterprise Platform available on Amazon's (NASDAQ: AMZN) EC2. This lets users build, deploy and host enterprise Java applications and services in the cloud.
And earlier this month, RightScale and GigaSpaces teamed up to let enterprises deploy and scale data- and transaction-intensive applications on EC2, and manage them. And 3Tera lets customers run applications in their own datacenter.

IBM Unveils Proactive E-Discovery Solution

IBM has unveiled eDiscovery Manager, a software product that lets enterprises manage electronically stored information so that they can retrieve it easily when a legal challenge requires e-discovery. The term e-discovery stands for 'electronic discovery' and refers to discovery in civil litigation dealing with information in electronic form.
Enterprises are being forced to adopt e-discovery solutions because of amendments in December to the Federal Rules of Civil Procedure (FRCP) that require companies to preserve and produce electronically stored information when facing litigation. The FRCP governs procedure for civil suits in U.S. federal district courts.
Part of the vendor's Enterprise Content Management (ECM) suite of products, eDiscovery Manager integrates with IBM's auto-classification and records management technology, and the vendor's content-centric business process management (BPM) capabilities.
The eDiscovery Manager uses IBM's e-mail archiving solutions, leverages the vendor's ECM repositories, and supports an easy-to-use interface. It "controls information at its source when it is created," Aaron Brown, program director of IBM Content Discovery, told InternetNews.com. The eDiscovery Manager works with IBM's Classification Module and content management repository for this.
"We let you handle capture, retention, archiving and content management to manage your content proactively because the information has already been retained, classified and managed," Brown said. Many other e-discovery vendors offer point solutions or outsourced third-party solutions that companies put in place when litigation comes in, according to Brown.
While these point and third-party outsourced approaches do work, "they are expensive, because organizations often have to expand them to deal with other cases, and typically these solutions don't address the bigger problem of information retention," Brown said. "We want e-discovery to be part of the basic daily process rather than having you run around putting out fires."
IBM (NYSE: IBM) is one of the few vendors that can offer an enterprise-class e-discovery solution, Rob Enderle, principal analyst at the Enderle Group, told InternetNews.com. "Nobody has the breadth anymore to cover all databases, all repositories; it requires a very large set of capabilities, and IBM Global Services and EDS are among the few entities that folks look to to help solve that kind of complex problem."
EDS is now part of Hewlett-Packard (NYSE: HPQ), which bought the company in May for $13.9 billion.
Together with other products in IBM's ECM suite, eDiscovery Manager lets enterprises sort, classify and archive information for easy retrieval.


The inner workings
IBM eDiscovery Manager is an integral part of the IBM Compliance Warehouse for Legal Control, a combination of software, hardware and services that lets enterprises achieve, sustain and prove compliance with multiple legal and compliance mandates.
It also supports IBM's broader Information Governance strategy, which helps clients define, enforce and monitor policies related to the control and quality of information. IBM eDiscovery Manager runs on Windows and AIX, IBM's version of Unix. IBM is moving AIX to open source.
When lawyers need information relating to a case, they provide the parameters such as the subject, a range of dates, and keywords to IT, which conducts a search using eDiscovery Manager's search-based interface. The product collects relevant information and puts a litigation hold on it, which marks that information as undeletable.
"A very important part of the eDiscovery process is that the tools you use must respect things like deletion policies and keep accurate logs of access and searches," Brown said. These functions have been built into eDiscovery Manager, he added.
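That workflow can be sketched in Python; the class and method names below are hypothetical, not eDiscovery Manager's actual API. The sketch shows the three pieces the article describes: a parameterized search, an audit log of every query, and a litigation hold that blocks deletion.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    subject: str
    created: date
    body: str
    on_hold: bool = False  # litigation hold: marked undeletable

class Archive:
    """Toy e-discovery archive (illustrative, not IBM's API)."""

    def __init__(self, docs):
        self.docs = list(docs)

    def search(self, keyword, start, end, audit_log):
        # Every search is logged: discovery tools must keep accurate
        # records of access and searches.
        audit_log.append(("search", keyword, start, end))
        return [d for d in self.docs
                if start <= d.created <= end
                and keyword.lower() in d.body.lower()]

    def hold(self, results):
        # Place a litigation hold so matching documents can't be deleted.
        for d in results:
            d.on_hold = True

    def delete(self, doc):
        if doc.on_hold:
            raise PermissionError("document is under litigation hold")
        self.docs.remove(doc)

log = []
archive = Archive([Document("merger", date(2008, 3, 1),
                            "Project Phoenix merger terms")])
hits = archive.search("merger", date(2008, 1, 1), date(2008, 6, 30), log)
archive.hold(hits)  # now undeletable until the hold is lifted
```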
Once the search is complete, the IT department checks the information gathered and exports the result to the legal team, Brown said. "This is not the end application for the legal team; we're just focused on proactive management and collecting and identifying the information."

Ruiz Steps Down as AMD CEO, Meyer Ascends

After seven quarters of bleeding money, Advanced Micro Devices will have a new CEO.
Hector Ruiz has stepped down as chief executive of the troubled microprocessor company, with Dirk Meyer, president and chief operating officer, taking over the reins.
The news was announced on a conference call to discuss AMD's (NASDAQ: AMD) second-quarter financials, which were not pretty. The company reported a loss of $1.189 billion, or $1.96 per share, on sales of $1.35 billion.
"The time is right," Ruiz said today during the company's earnings call. "Barcelona is shipping, the conversion to 45 nanometer is on track," he added, referring to the much-delayed launch of AMD's quad-core Opterons and the company's upcoming smaller chip designs.
He also said the company has made progress on its efforts to streamline its business operations in connection with foundry partners, a plan it calls "Asset Smart".
"This is why the time is right to turn the company over to a new leader, one who has earned the trust of AMD partners and customers worldwide," he said.
Ruiz, 62, joined AMD as president and chief operating officer in January 2000 and became AMD's chief executive officer on April 25, 2002. He has served on AMD's board of directors since 2000 and was appointed chairman of the board of directors in 2004.
Meyer, 46, joined AMD in 1995 as part of the design team for the original AMD Athlon processor. He worked his way up to president and COO in 2006. He holds more than 40 patents as an engineer.
Ruiz will stay on as executive chairman and chairman of the board, working on the Asset Smart strategy and continuing the battle against Intel by "breaking our industry from the grip of an illegal monopoly," as he put it.

EU Claims Intel Paid Retailers to Drop AMD

BRUSSELS -- European Union antitrust regulators made new accusations against chipmaker Intel on Thursday, saying it paid retailers to not sell PCs using chips made by rival AMD.
The "statement of objections" from the European Commission follows 2007 charges against Intel (NASDAQ: INTC) that claimed the world's biggest microchip producer gave computer makers rebates to limit their use of AMD (NYSE: AMD) chips or avoid them altogether.
The expansion of the accusation means the Commission is now weighing charges that Intel illegally manipulated both the wholesale and retail channels in an effort to suppress its competitor.
"The Commission also considers at this stage of its analysis that all the types of conduct reinforce each other and are part of a single overall anticompetitive strategy aimed at excluding AMD or limiting its access to the market," the EU's executive body said in a statement. First, the Commission said Intel had provided substantial rebates to a leading European personal computer retailer, conditioned on it selling only Intel-based PCs.
Secondly, the Commission said Intel paid a PC maker to delay the planned launch of a product line incorporating an AMD-based CPU.
Thirdly it gave the same computer maker substantial rebates to encourage it to get all its CPUs for laptops from Intel, said the statement.
Intel has its logo on four-fifths of the central processing units that run the world's 1 billion personal computers, while AMD accounts for the rest.
The Commission could fine Intel, though any penalty would be unlikely to approach the cap of 10 percent of annual revenue. It could also damage the firm's reputation by labeling the company an unfair competitor.
Intel has eight weeks to reply to the charges.
The EU executive in mid-2007 publicly alleged three kinds of violations by Intel, including providing CPU chips to strategic customers such as governments and educational institutions below cost.
Intel has said repeatedly that it did nothing wrong.
In June, South Korean authorities fined Intel about $26 million, finding it had offered rebates to South Korean PC makers including Samsung Electronics and Trigem Computer in return for not buying AMD microprocessors.
One day after the South Korean finding, the U.S. Federal Trade Commission launched its own formal probe.
The state of New York is also investigating Intel.
In Japan, the Fair Trade Commission concluded in 2005 that Intel had violated that country's Antimonopoly Act. Intel disagreed with the findings but accepted the commission's recommendation, a move that allowed it to avoid a trial.

AMD Loses $1.18B and Its CEO

Seven is usually considered a lucky number, but not for Hector Ruiz. After seven quarters of red ink, the AMD CEO has resigned his post, turning it over to President and Chief Operating Officer Dirk Meyer, who had long been acknowledged as Hector's successor-in-waiting.
Ruiz will stay on as executive chairman and chairman of the board, working on the company's mysterious "asset smart" strategy and continuing the battle against Intel by "breaking our industry from the grip of an illegal monopoly," as he put it.
"The time is right," Ruiz told a conference call of financial analysts. "Barcelona is shipping, the conversion to 45 nanometer is on track. We've made progress on asset smart. This is why the time is right to turn the company over to a new leader, one who has earned the trust of AMD partners and customers worldwide."
Ruiz, 62, joined AMD as president and chief operating officer in January 2000 and became AMD's chief executive officer on April 25, 2002. He has served on AMD’s board of directors since 2000 and was appointed chairman of the board of directors in 2004.
Meyer, 46, joined AMD in 1995 as part of the design team for the original AMD Athlon processor. He worked his way up to president and COO in 2006. He holds more than 40 patents as an engineer. An AMD spokesman said there were no immediate plans to replace Meyer as COO and that the company would stick with the existing executive team for now.
Right now, patents are not what AMD needs; it needs to make some money. The company just reported a second-quarter net loss of $1.19 billion, or $1.96 per share, compared with a net loss of $600 million, or $1.09 per share, in the second quarter of 2007. Revenue rose to $1.35 billion from $1.31 billion a year ago.
The lion's share of the loss was $876 million in write-offs stemming from the ATI acquisition, which closed two years ago. AMD has taken several depreciation charges for ATI over the past few years. These write-downs are simply a realignment of the balance sheet, an acknowledgment that the company has less value on paper than it thought it did. No money actually goes out the door.
But in this quarter, the company was cash flow negative. It posted an operating loss of $143 million, even though its gross margin was 52 percent and expenses were in line with prior quarters. The loss could be attributed in part to the seasonally weak quarter and, as Chief Financial Officer Bob Rivet put it, "the challenges of the consumer macro economic climate."
The company will also take charges over the next few quarters as it brings its fabrication plants up to a 45 nanometer manufacturing process, a transition on which Intel is several quarters ahead, Rivet said.

AMD Pushes 'Cinema 2.0' With Superfast Chip

SAN FRANCISCO – Hail, hail the GPU. At least that seems to be the rallying cry of competitors ATI (owned by AMD) and nVidia, two of the leading purveyors of graphics processors. At a media event here today, AMD previewed its next-generation Radeon 4000 chip, capable of up to a teraflop of performance.
"This chip can execute a trillion floating point operations per second or the same as a teraflop computer. This is a phenomenal leap for a chip that only measures a centimeter on a side," said Rick Bergman, senior vice president of AMD's graphics products group.
He said the Radeon 4000 will be available next week in PC add-in cards, branded Radeon HD, for about $200. Desktop and notebook computer makers will eventually incorporate the chip as well.
Want better gaming and graphics? Bergman claimed the Radeon 4000 offers more performance than all previous game consoles combined. He also said it has 100 times the compute capability of the IBM Deep Blue supercomputer that beat world chess champion Garry Kasparov back in 1997.
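That second claim is easy to sanity-check. Deep Blue's peak is widely reported at roughly 11.38 gigaflops, so a one-teraflop chip works out to nearly 90 times its compute capability, in the same ballpark as the figure Bergman quoted:

```python
radeon_flops = 1.0e12      # the claimed teraflop: 10^12 floating point ops/sec
deep_blue_flops = 11.38e9  # Deep Blue's widely reported peak, ~11.38 gigaflops

ratio = radeon_flops / deep_blue_flops
print(round(ratio))  # roughly 88, close to the "100 times" claimed
```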
AMD (NYSE: AMD) spent the better part of a 90-minute presentation describing its vision of "Cinema 2.0," an ecosystem of developers and companies that hold the promise of delivering movie-quality realism with the interactivity of today's most popular videogames. Some demos included video games with near lifelike animations in science fiction-themed scenarios.
"Experts claim we are about seven years away from getting to the visual quality of movies" on the desktop, said Bergman. "We believe those estimates are off by seven years."
"Sin City" director Robert Rodriguez said of the Cinema 2.0 effort, "The industry's been dying for this." In pre-taped comments, Rodriguez said moviegoers are no longer satisfied with passive observation. "They want to be part of the process …. It's almost like a Roman mob, they want more."
He said movie studios are excited by the potential of this new generation of graphics technology, which will enable more realistic, interactive video games to be released at the same time as, or shortly after, a film's theatrical release.
The core of the issue
Analyst Jon Peddie said AMD is in a horse race with nVidia, which announced a new family of GeForce GTX 200 graphics processors today, slated for availability starting tomorrow. nVidia said the new chips deliver 50 percent more gaming performance than the company's previous GeForce 8800 Ultra GPU through a whopping 240 enhanced processor cores that can drive resolutions as high as 2560 x 1600.
AMD did not make all the technical specs of the Radeon 4000 available in advance of next week's release, though one source briefed on the chip said it will offer twice as many cores as nVidia's part.


Both companies are taking a very different approach to advanced graphics than Intel (NASDAQ: INTC). At its Research Day event last week, Intel CTO Justin Rattner said that, long term, Intel views the traditional approach of raster graphics driven by the GPU as "problematic."
He said Intel thinks a new architecture based on aggressive 'many core' processors "will deliver a vastly better visual chip."
Rattner also said the first example will be Intel's forthcoming Larrabee architecture, a many-core design for visual computing that he said will be previewed at the Siggraph conference in August.
"ATI (AMD) and nVidia are the leaders in raster graphics and they see Intel's view as ridiculous," Jon Peddie, principal analyst at Jon Peddie Research, told InternetNews.com. "Intel wants to push the CPU and x86 processors as the solution."
Peddie notes that Larrabee, due out next year, will have 24 to 48 cores, a fraction of the number offered by dedicated graphics processors from AMD and nVidia. "You can't expect a general-purpose processor to compete with a specialized one, it's absurd," said Peddie.
Still, he conceded, "Intel is the 800-pound gorilla" in the chip market and will get a share of the graphics market "just for showing up."
Last week, Rattner noted Intel "has nothing against GPUs. We probably build more of them than anyone else." But he suggested Larrabee and other many-core processors are the chip giant's longer term direction.

Trial for Intel-AMD Dispute Delayed to 2010

The trial date in a long-running legal battle between chip giant Intel (NASDAQ: INTC) and smaller rival Advanced Micro Devices (NYSE: AMD) was delayed until 2010, both companies said on Thursday.
In the lawsuit originally filed in 2005, AMD accused Intel of giving computer makers illegal discounts and retaliating against manufacturers who used AMD chips or stores that gave significant shelf space to computers with AMD chips.
Intel has denied any wrongdoing.
The two sides will split 250 days to depose witnesses, with AMD getting slightly more than Intel, said AMD attorney Chuck Diamond and Intel spokesman Chuck Mulloy, both of whom were at a hearing at the U.S. District Court for the District of Delaware in Wilmington.
The trial had been set for April 2009 but was pushed back to Feb. 20, 2010, Diamond said.
AMD had asked the court for 486 depositions in hopes of proving that Intel broke the law in competing with AMD. Intel sought to limit each company to 75 depositions.
Intel will also be required to produce Edward Ho, an Intel employee in China, for a deposition, said Diamond and Mulloy by telephone. AMD hopes that Ho's testimony will help them prove their case.
Also on Thursday, the Korea Fair Trade Commission in Seoul said Intel abused its dominant position in the local market and ordered a fine of $25.6 million. Intel said it would almost certainly appeal.
The U.S. Federal Trade Commission has an informal probe under way into whether Intel abused its dominant position, while the New York state attorney general opened a formal probe in January.
Last July, the European Commission in Brussels charged Intel with selling chips below cost and offering customers huge rebates in an illegal attempt to drive AMD out of the market.
In Japan, the Fair Trade Commission concluded in 2005 that Intel violated the antimonopoly act. Intel disagreed with the findings but accepted the commission's recommendation, a move that avoided a trial.

AMD Finally Offers a Notebook Platform

AMD is kicking off the Computex show with the announcement of Puma, its first platform for the mobile computing market designed to counter Intel's successful Centrino platform.
Puma combines a new AMD Turion X2 dual-core mobile processor with ATI Radeon HD 3000 graphics and an ATI chipset. This new platform already has a number of OEM design wins, including Acer, Fujitsu, Fujitsu Siemens Computers, MSI, NEC and Toshiba.
The effort marks AMD's first unified platform of CPU, graphics and chipset. It's also the first time AMD (NYSE: AMD) has a real laptop story to tell.
Until now, the company has just taken its desktop parts and tuned them down a little for laptops. Using that approach, AMD has done fairly well in the mobile market. But now it actually has a mobile-specific platform, which can only help against Centrino, one analyst said.
"With this new Puma platform, they have, for the first time, actually tried to think about what a chip inside a notebook needs versus what one inside a desktop needs, and they've changed the design in many ways to save power without compromising performance in this mobile environment," said Nathan Brookwood, research fellow with Insight64.
The Puma platform is launching with 2.4GHz and 2.1GHz dual-core processors, which include mobile-specific enhancements like core power management for idle states and battery management, and with AMD's M780G and SB700 chipset, which features ATI Radeon HD 3200 graphics. The graphics processor was top of the line two years ago and delivers DirectX 10 support for Vista and 1080p HD video playback.
AMD has no plans for quad-core notebooks for now, nor is it interested in pumping up the CPU speed.
"The processor has become a less-important criterion in the purchase of a notebook compared to memory, the screen and graphics performance," said Bahr Mahony, director of AMD's mobile division.
However, it won't be the CPU keeping AMD from targeting the enterprise with the Puma. Instead, that decision is due to battery life: Puma will offer between 4.5 and five hours of battery life, but OEMs want five to six hours for business customers, Mahony said.
Still, battery life is a chief factor in many of Puma's design elements. The platform's 780 chipset has a feature called PowerXpress that allows the notebook to switch between integrated and discrete graphics.
As a result, OEMs can build the notebook with a discrete graphics processor -- either AMD's or nVidia's -- and then the user can switch between the integrated and discrete graphics depending on the performance or power savings they want.
Switching from discrete to integrated graphics can mean 90 more minutes of battery life, Mahony said.
Along with becoming AMD's first unified CPU, graphics and chipset platform, Puma also assists the company in planning for the future, Mahony said.
"We found the platform to be a significant milestone for us," he said. "We were able to achieve balance throughout the platform and that experience will be vital for the next platform, Fusion."
Brookwood said it does help AMD catch up to Intel in the area of platform integration, since Intel has had something similar for a while. It offers CPU and chipset integration with a GPU, either AMD's or nVidia's, on its laptop platform. Still, he felt Puma is a big help for AMD.
"They won't take the top prize in terms of either computation performance or battery life, but they certainly narrow the gap with what Intel is offering," Brookwood said. "From a purely tactical situation, AMD is on track to deliver these products now, while Intel is having trouble with the schedule for their Centrino 2 platforms."
Intel had planned to introduce Centrino 2, developed under the codename "Montevina," at this week's Computex show in Taiwan, but has delayed its launch until July 14 to work out some certification issues.

AMD Moves to Woo Overclockers

A CPU's clock speed -- the rate at which it executes instructions -- combines the system bus speed with a multiplier that specifies the number of CPU cycles executed for each bus cycle. A 2.4GHz processor, for example, might run on a 200MHz bus with a multiplier of 12, meaning the CPU executes 12 clock cycles for every bus cycle.
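The arithmetic above can be sketched in a few lines (the function name here is illustrative, not any vendor tool):

```python
# Illustrative arithmetic only: effective CPU clock = bus speed x multiplier.
# The function name is hypothetical, not an AMD or Intel utility.

def effective_clock_mhz(bus_mhz: float, multiplier: float) -> float:
    """Return the CPU core clock in MHz for a given bus speed and multiplier."""
    return bus_mhz * multiplier

# The article's example: a 200MHz bus with a 12x multiplier yields a 2.4GHz core.
print(effective_clock_mhz(200, 12))   # 2400.0 MHz = 2.4GHz

# An overclocker raising the bus to 220MHz at the same multiplier:
print(effective_clock_mhz(220, 12))   # 2640.0 MHz, roughly a 10 percent overclock
```

This is also why raising the bus speed affects the whole system, not just the CPU: every component clocked off the bus runs faster too.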
By tweaking the clock, multiplier and system power, overclockers hope to get more power out of their system than it's supposed to offer. The downside to any tweaking, however, is that pushing components beyond their rated limits can make a system unstable -- but enthusiasts feel it's worth the risks.
"They feel like they are getting something for free. That's why they do it," Adam Kozak, chipset product marketing manager for AMD, told InternetNews.com.
As a result, don't expect to see AMD's new 790GX chipset in business computers like Dell's Vostro line or Lenovo ThinkPads. It's purely for the hobbyist market, which is often willing to spend extra to squeeze additional performance from their systems -- a fact that could help AMD (NYSE: AMD) in its sweeping effort to right itself.
Neither AMD nor Intel (NASDAQ: INTC) has tried to stop overclocking of its processors, but neither has officially condoned it, either. Instead, it's often a source of great amusement to both to see how far hobbyists will go in risking turning their CPU into a charcoal briquette. In AMD's case, it decided to make the process a little safer.
"This is what gamers like to do," Kozak said. "We can either ignore it and have an underground scenario where things are happening we can't control, or we can offer the right tools and not control it but at least say, 'Here's how to do it,' and we can market it as a gamer-friendly type of product."
The 790GX chipset is actually the product of ATI Technologies, which AMD bought in 2006. Having the two firms integrated was a huge help in getting access to technical information, Kozak said, making an overclocking-friendly chipset possible.
The AMD 790GX, designed for its Phenom processors, features what the chipmaker calls Advanced Clock Calibration for tweaking the settings on the CPU. The 790GX not only allows for enhanced performance; it also allows a computer to be underclocked into a lower-power state to cut down on energy consumption and heat.
The Catalyst software that comes with an ATI chipset also will have built-in controls for dropping the computer into a low-power state, and will integrate into Windows Vista's power management controls, Kozak said.
The 790GX also supports ATI's Hybrid Graphics Technology, which allows both discrete and integrated graphics at the same time. This is meant for the laptop market, since there is a marked difference in power consumption between integrated and discrete graphics.
When operating off the battery, a laptop can use integrated graphics, which aren't as powerful but don't draw as much power. Once it's plugged into a power outlet, the laptop can switch to discrete graphics for higher performance.
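The switching policy described above can be sketched as simple selection logic (this is an illustrative sketch only, not the actual PowerXpress driver interface; all names are hypothetical):

```python
# Hypothetical sketch of power-aware GPU selection, as described in the text.
# "integrated"/"discrete" and the parameter names are illustrative, not a real driver API.

def select_gpu(on_ac_power: bool, prefer_performance: bool = False) -> str:
    """Pick a GPU: discrete when on AC power (or explicitly requested), integrated on battery."""
    if on_ac_power or prefer_performance:
        return "discrete"    # higher performance, higher power draw
    return "integrated"      # lower power draw, longer battery life

print(select_gpu(on_ac_power=True))    # discrete
print(select_gpu(on_ac_power=False))   # integrated
```

The `prefer_performance` override reflects the article's point that the user, not just the power source, can choose which GPU is active.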
The AMD 790GX chipset also offers 1080p HDTV video and support for advanced video codecs like VC-1, MPEG-2 and H.264.
Motherboards featuring the 790GX are expected to be available today from major motherboard vendors like Asus, Gigabyte and MSI.

AMD's New CEO and Execution

The ascension of Dirk Meyer to CEO of Advanced Micro Devices (AMD) is not entirely unexpected. He was widely viewed as the successor to Hector Ruiz, who stepped down yesterday.
The 62-year-old Ruiz came to AMD in 2000 after 22 years with Motorola, where he headed the company's semiconductor unit. In his eight years with AMD (NYSE: AMD), Ruiz saw the company go from being an also-ran to Intel, to a serious threat and technology innovator, then to falling from grace after a string of seven quarters in the red.
Intel (NASDAQ: INTC) is a company that prides itself on execution, and if you're going to take on a company like that, you have to execute, or be executed.
On the conference call yesterday when the executive change was announced, Meyer, 46, said the plan was to "execute, execute, execute" and was critical of prior quarters.
Jim McGregor, senior analyst with In-Stat, noted that Meyer has been copilot during that whole period that he criticized. "My feeling is isn't that the same thing that's been going on for the past year? So is this just a shell game of titles?" he said.
"It comes down to you have to have a vision. Where is this company going to play, and what is our strength?" McGregor added. "And he's got to put his job on the line. He's got to say we're going to be profitable by a certain point; say it, do it, mean it."
AMD spokesman Drew Prairie pointed to Meyer's previous stint as head of the semiconductor unit from 2001 to 2006 as an example of his prior work.
"If you look at where that business was in 2001, it was similar to where the company is now," he told InternetNews.com. "It was not performing well. That was a business that needed to be focused and turned around and he did. Revenues doubled under his leadership."
AMD caught Intel flat-footed on three notable occasions: it was the first to introduce 64-bit x86, which Intel dismissed for a while until it saw what a hit the processor was becoming for AMD; it was first to market with a dual-core processor; and it was first to put the memory controller on the CPU while Intel stuck with the front-side bus.
However, the last two years have not been that good to AMD. Barcelona, its native quad-core server processor, was months late and under-delivered in performance when it first shipped. OEMs remained patient with AMD more out of desire to see someone provide Intel with healthy competition than because they believed in AMD.
"They can't afford to abandon AMD, and rightfully so," McGregor said. "The AMD 64 architecture has a good place in the market, especially on multiprocessor servers. So the market would love to see a strong competitor. I think even Intel would love a strong competitor. That helps Paul [Otellini, Intel's CEO] push a lot of change he wants throughout Intel. But it can't come down to leniency by the market."
Just as Bo knew baseball, Meyer knows chips. He was an engineer with Digital Equipment Corp. and was co-architect of two Alpha processors, the first 64-bit processor on the market. He joined AMD in 1995 and led the Athlon team, which made the first 64-bit x86 processor.
AMD needs some sharp engineering to pull itself out of this mess and make chips that remain competitive with Intel. But engineering isn't everything: Meyer holds 40 patents; Intel CEO Paul Otellini holds zero. Whose company made $1 billion this quarter, and whose company lost $1 billion?
"It's one thing to be a tech leader, it's another thing to be a market leader and AMD needs that marketing and management prowess," said McGregor. "I don't know if Dirk is the right guy. He's been in this management transition for two years and we haven't seen much."
Prairie said the company is retrenching after its rapid growth early this decade. "From the period of 2003 through 2006, we went on an incredible growth trajectory in terms of number of products and employees. Like all companies that grow very large very quickly, we lost some efficiencies and didn't scale successfully. So what he's doing is focusing the company back down a little on the core techs that are going to make us successful."

AMD ATHLON

With the Athlon processor, AMD has implemented a very advanced CPU design indeed. The Athlon's feature list is very impressive, including:

  • 128 KB level 1 cache
  • 200 MHz Alpha EV6 front side bus
  • 512 KB to 8 MB level 2 cache running from 1/3 to full clock speed
  • Seven-issue superscalar architecture
  • A fully pipelined, superscalar FPU
  • 19 new 3DNow! instructions
  • AGP 4X support (not with the AMD 750 chipset, thus not available at launch)
  • UDMA/66 support native to the chipset
  • Slot A motherboard interface (similar to Intel's Slot 1)
  • Support for PC-100 & PC-133 SDRAM, DDR RAM & possibly Rambus in the future

These features give the Athlon a number of advantages over existing x86 CPU designs, making it the first seventh-generation x86 processor. The increase in level 1 cache to 128 KB could bring performance benefits to a wide range of applications, especially when the chip is used in a multi-processing environment. This is coupled with a flexible level 2 cache arrangement that allows AMD to tailor the Athlon to particular markets by fitting different amounts and speeds of cache. That gives AMD the ability to produce a range of Athlons similar to Intel's P6 line, spanning a range broadly comparable to the Celeron-to-Xeon lineup.

Most dramatically, the Athlon features a fully pipelined floating point unit, allowing AMD to redress the deficiency in FPU performance that its CPUs suffered compared to Intel's offerings. The Athlon offers PC users the first x86 CPU with separate pipelines for FADD, FMUL and FSTORE instructions, unlike Intel's P6, which offers only one full pipeline plus a shared one. This has allowed the Athlon to post some very impressive benchmark scores in numerous 3D games and applications on the sheer strength of its FPU.

The Athlon also has a very advanced core with a 2048-entry branch prediction table, which records the outcomes of recently executed branches and attempts to predict which way the next branch will go. Because the Athlon's pipelines are shorter than the P6 core's, the penalty for a mispredicted branch is smaller.
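The effect of branch prediction on throughput can be illustrated with the standard average-CPI model from computer architecture texts (a sketch only; the branch frequencies, misprediction rates and penalties below are hypothetical round numbers, not measured Athlon or P6 figures):

```python
# Textbook model: average CPI = base CPI + branch_fraction * mispredict_rate * penalty.
# A shorter pipeline means a smaller misprediction penalty (fewer stages to flush).

def average_cpi(base_cpi: float, branch_fraction: float,
                mispredict_rate: float, penalty_cycles: int) -> float:
    """Average cycles per instruction, including branch-misprediction stalls."""
    return base_cpi + branch_fraction * mispredict_rate * penalty_cycles

# Hypothetical workload: 20% of instructions are branches, 5% of those mispredict.
short_pipe = average_cpi(1.0, 0.20, 0.05, penalty_cycles=10)   # shorter pipeline
long_pipe = average_cpi(1.0, 0.20, 0.05, penalty_cycles=14)    # longer pipeline
print(round(short_pipe, 2), round(long_pipe, 2))  # 1.1 1.14
```

The gap widens as pipelines get deeper, which is why a large, accurate branch prediction table matters more for deeply pipelined designs.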

AMD has not rested on its laurels as far as the 3DNow! instruction set is concerned. With the Athlon, the original 21 3DNow! instructions are augmented by a further 19 new instructions. Many of these offer the same prefetching functions as Intel's SSE instructions. The enhanced 3DNow! will be supported by DirectX 6.2.