In one of our recent articles published on this forum, we provided a comprehensive listing of new and enhanced features introduced in the Integration Services incorporated into the soon-to-be-released SQL Server 2012. We intend to examine each of them in depth in upcoming presentations. However, for the sake of completeness and chronological order, we will start by guiding you through the steps required to install and configure the product. Note that our discussion is based on the current (as of February 2012) SQL Server 2012 Release Candidate 0, downloadable from the Microsoft Download Center for both x86 and x64 platforms (and also available as an .iso file containing both versions).
Today I can finally speak publicly about a new cloud service that has seen the light of day, and in whose CTP phase I had the pleasure of participating.
Ever since SQL Azure became available on the market, one of the main challenges has been how to implement backups of our databases. Initially, the only option was to make a copy of the database on another SQL Azure server, which meant paying the cost of the additional database. Later, new tools began to appear that allowed exporting and importing data and schemas through SQL scripts or through the new “bacpac” format. A good summary of them can be found in this blog entry by Luis Panzano, which shows the advantages and disadvantages of each.
Takeaway: When planning a client server, this guide will help ensure the RAID level you select best fits the requirements.
Many IT consultants build and deploy new servers for clients on a very regular (sometimes weekly) basis. Doing so requires numerous decisions about chassis design, processor count and speed, disk capacity and speed, redundant power supplies, memory, operating system, and warranty replacement windows. It is easy to overlook the specified RAID level, even though this detail is a critical component of the server architecture.
Steven Mackay writes that Virginia Tech’s new “HokieSpeed” supercomputer will be a veritable “war horse” for researchers working across diverse sciences.
You may remember how Virginia Tech crashed the supercomputing arena in 2003 with System X, a novel Apple server cluster powered by the company’s G5 processors. Ranked at number 96 on the TOP500 and number 11 on the Green500, the new HokieSpeed supercomputer is 22 times faster and yet a quarter of the size of X, with a double-precision peak of 240 teraflops.
Best 2-CPU server result ever, with Violin's chips. Oracle claims a world-record TPC-C result with its database running on a Cisco server rather than an Exadata system, although it doesn't mention that two Violin Memory flash arrays were needed.
A Cisco UCS C250 extended memory server with two six-core Xeon X5690 processors, 384GB of DRAM, and two Violin Memory flash arrays (5.3TB V-3205 and 16.3TB V-6000) ran Oracle's 11g database on Oracle Linux, and scored 1,053,100 transactions per minute (tpmC), with a cost per transaction of $0.58.
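The two published numbers can be cross-checked against each other: in TPC-C reporting, price/performance is the total system price divided by throughput, so the approximate total price falls out directly. A quick sketch (the derived total is our arithmetic, not part of the quoted result):

```python
# Derive the approximate total system price from the published
# TPC-C throughput and price/performance figures.
tpmc = 1_053_100       # throughput: transactions per minute (tpmC)
price_per_tpmc = 0.58  # price/performance: USD per tpmC

total_price = tpmc * price_per_tpmc
print(f"Approximate total system price: ${total_price:,.0f}")
# → Approximate total system price: $610,798
```

Note that the published $/tpmC is itself rounded, so the derived total is only approximate.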
As with any server product, there are lots of ways to configure UCS, including different levels of CPU, memory and storage. Cisco has a 29-page document to help you get it right, and 29 pages are not overkill. To get an idea of what this might cost, we configured two separate systems: one with 40 dual-socket blades, and another with 80 of the same blades.
We picked Intel 5600-series (Westmere-EP) X5675 CPUs, each with six cores running at 3.06 GHz, an expensive but pretty common choice for enterprise virtualization workloads. We also packed in 96GB of memory for each blade, and put in only a single small SATA drive for booting, logging and diagnostics.
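For a back-of-the-envelope feel for the two configurations, the stated parts can be totaled up. This sketch assumes the 96GB of memory is per blade, which is the typical fit for dual-socket virtualization hosts of that era:

```python
# Rough capacity totals for the two sketched UCS configurations:
# dual-socket X5675 blades (6 cores per socket at 3.06 GHz),
# with 96GB of memory per blade (assumption).
CORES_PER_SOCKET = 6
SOCKETS_PER_BLADE = 2
MEM_GB_PER_BLADE = 96

for blades in (40, 80):
    cores = blades * SOCKETS_PER_BLADE * CORES_PER_SOCKET
    mem_tb = blades * MEM_GB_PER_BLADE / 1024
    print(f"{blades} blades: {cores} cores, {mem_tb:.2f} TB RAM")
# → 40 blades: 480 cores, 3.75 TB RAM
# → 80 blades: 960 cores, 7.50 TB RAM
```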