HP introduces new 3PAR StoreServ 8000 series arrays for the midrange

After an almost three-year run, HP is replacing the 3PAR StoreServ 7000 series with the all-new 3PAR StoreServ 8000 series arrays.  This news comes while HP is celebrating how well its midrange 3PAR arrays have been selling versus competitors.  The new arrays feature upgraded hardware, including 16Gb Fibre Channel and 12Gb SAS connectivity for the drives, and use the same fifth-generation ASIC that was introduced in the 20000 series arrays earlier this year.  The 8000 series also increases storage density across the board in the 3PAR arrays, reducing the footprint and increasing the top-end capacities.

In terms of portfolio, HP touts a single architecture, a single OS and single management across a wide range of solutions with HP 3PAR.  With the 8000 series introduction, the differences between 3PAR models come down to the number of controller nodes and associated ports, the types of drives in the array and the number of ASICs in the controllers.  The 8000 series features a single ASIC per controller node, while the 20000 series features two ASICs per controller node along with more CPU capacity and more RAM for caching.

Both the 8000 and 20000 series arrays feature the 3PAR Gen5 ASIC, the latest generation introduced earlier in 2015.  If history repeats, additional capabilities of the Gen5 ASIC will be unlocked by future software upgrades on these two new series of arrays, but out of the gate the new platforms are already touting density and performance gains.  HP says it has increased density by 4x, improved performance by 30 to 40 percent and decreased latency by 40 percent between the 7000 and 8000 series arrays.  HP says the 8000 series can provide up to 1 million IOPS at 0.387 ms latency.

HP also announced a new 20450 all-flash starter kit.  This model scales to a maximum of 4 controller nodes as opposed to 8 controller nodes in the 20800 and 20850 models. The 20000 series are the high-end storage arrays HP introduced earlier this year to replace the 10000 series arrays, and are typically targeted at large enterprise and service providers.

That rounds out the HP 3PAR portfolio with the following models:

  • HP 3PAR StoreServ 8200 is the low-end dual-controller model that scales up to 750TB of raw capacity
  • HP 3PAR StoreServ 8400 scales up to 4 controller nodes and is capable of scaling out to 2.4PB of raw capacity
  • HP 3PAR StoreServ 8440 is the converged flash array that provides high performance similar to an 8450 array, but with the ability to also use spinning disks.  It scales up to 4 controller nodes and includes an increased amount of cache on the controller pairs, comparable to the cache per node on an all-flash array.
  • HP 3PAR StoreServ 8450 is the all-flash storage array, which scales up to 4 controller nodes, 1.8PB of raw capacity and a usable capacity of over 5.5PB.  This is the model HP talks about when it says 1 million IOPS at under 1 ms of latency.
  • HP 3PAR StoreServ 20450, a quad-controller, all-flash configuration with larger scale than the 3PAR 8450
  • HP 3PAR StoreServ 20800, the workhorse array with up to 8 controller nodes and a mix of hard disk and solid state drives.
  • HP 3PAR StoreServ 20850, the all-flash configuration of the 20000 series.


HP announced that the new 8450 all-flash array is available in a 2U starter kit priced at just $19,000 for 6TB of usable storage.  When HP talks about usable storage on the all-flash array, it assumes a 4-to-1 compaction using its thin provisioning and thin deduplication, both native, real-time capabilities powered by the ASIC.  The same array can also be configured with up to 280TB of usable capacity in just 2U of space.
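To make the compaction arithmetic concrete, here is a minimal Python sketch of how a 4-to-1 ratio turns raw flash into the usable figures quoted above. The raw-capacity inputs are illustrative assumptions derived by working backwards from the quoted usable numbers, not HP specifications.

```python
# Illustrative only: usable capacity under an assumed 4:1 compaction ratio.
# Real savings depend entirely on how well the data set dedupes and thin-provisions.
def usable_capacity_tb(raw_tb: float, compaction_ratio: float = 4.0) -> float:
    """Usable capacity after thin provisioning and deduplication."""
    return raw_tb * compaction_ratio

# ~1.5 TB of raw flash at 4:1 gives the 6 TB usable quoted for the starter kit.
print(usable_capacity_tb(1.5))   # 6.0
# ~70 TB of raw flash at 4:1 gives the 280 TB usable quoted for a full 2U shelf.
print(usable_capacity_tb(70))    # 280.0
```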

All this news comes just in time for VMworld, where HP is going to be showing the new arrays publicly for the first time.  I look forward to checking them out on the show floor and talking with some HP folks to find out more.


EMC VNX 5700

Please note that the EMC VNX 5700 has now reached end of availability. It has been replaced by the EMC VNX 5800.
VNX SERIES MODELS AVAILABLE

EMC VNX 5200
EMC VNX 5400
EMC VNX 5600
EMC VNX 5800
EMC VNX 7600
EMC VNX 8000
EMC VNX-F
EMC VNX-CA
IDEAL FOR: Medium-sized to enterprise businesses. The VNX5700 supports up to 500 disk drives, scales up to 984TB, delivers maximum performance for virtualized workloads and features flash-optimized scalability.
VNX 5700 HIGH PERFORMANCE

The EMC VNX series delivers high-performing unified storage with unsurpassed simplicity and efficiency, optimized for virtual applications. With the VNX Series, you’ll achieve new levels of performance, protection, compliance, and ease of management.
News Article: Proact to deliver EMC VNX5700 Disk System to the University of Oulu
KEY FEATURES OF THE VNX5700

Unified: A single platform for all file and block data services. Centralized management makes administration simple. Data efficiency services reduce your capacity requirements up to 50 percent.
Optimized: Optimize for virtual applications with VMware and Microsoft Hyper-V integration. You’ll triple the speed of virtualized SQL and Oracle workloads and boot up to 1,000 virtual desktops in fewer than 8 minutes.
High Performance: Take advantage of Flash-optimized performance from an all Flash array by using Flash drives for extendable cache and in the virtual storage pool. High-bandwidth configurations are ideal for data warehousing.
VNX 5700 RESOURCES

Download the EMC VNX Series Family Overview
Download the EMC VNX Series Specifications Overview.
Download the EMC VNX Series Software Suites Overview
Watch Video: Discussion of VNX performance
Watch Video: Rich Napolitano On The New EMC VNX Family
PROACT EMC VNX 5700 SOLUTIONS

Proact can also offer a complete design, implementation, configuration, training and support service for all EMC solutions. Our professional services team numbers more than 350 skilled and experienced consultants and engineers, and our enterprise-class support is delivered 24×7 and in local languages across 13 European countries, providing an end-to-end service wrap for all your needs. For more information on how Proact can help, click here.
Product Comparison
Technical Specifications


                                VNX5100     VNX5300     VNX5500     VNX5700     VNX7500
Max number of drives            75          125         250         500         1,000
Max raw capacity                150TB       240TB       480TB       984TB       1,974TB
Max SAN hosts                   512         2,048       4,096       4,096       8,192
Max number of pools             10          20          40          40          60
Max number of LUNs              512         2,048       4,096       4,096       8,192
Max file system size            n/a         16TB        16TB        16TB        16TB
Max usable file capacity
per X-Blade                     n/a         200TB       256TB       256TB       256TB
Drive types                     Flash, SAS and NL-SAS on all models
File: number of X-Blades        None        1 or 2      1, 2 or 3   2, 3 or 4   2 to 8
File: protocols                 n/a on VNX5100; NFS, CIFS, MPFS and pNFS on all other models
Block: storage processors       2           2           2           2           2
Block: protocols                FC on VNX5100; FC, iSCSI and FCoE on all other models
Management and base software    VNX5100: Unisphere, Protocols, Virtual Provisioning, SAN Copy
                                VNX5300 and above add File Dedupe/Compression and Block Compression

Solution Overviews
Top 10 Reasons Why Customers Deploy Virtualized Microsoft Applications on EMC VNX
Top 10 Reasons Why Customers Deploy Virtualized Oracle Environments on EMC Unified Storage
Top Five Reasons Why Customers Deploy VMware View on EMC Unified Storage

Big Data Definition

Big data is an evolving term that describes any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information.

Big data can be characterized by the 3Vs: the extreme volume of data, the wide variety of data types and the velocity at which the data must be processed. Although big data doesn’t refer to any specific quantity, the term is often used when speaking about petabytes and exabytes of data, much of which cannot be integrated easily.

Because big data takes too much time and costs too much money to load into a traditional relational database for analysis, new approaches to storing and analyzing data have emerged that rely less on data schema and data quality. Instead, raw data with extended metadata is aggregated in a data lake and machine learning and artificial intelligence (AI) programs use complex algorithms to look for repeatable patterns.

Big data analytics is often associated with cloud computing because the analysis of large data sets in real-time requires a platform like Hadoop to store large data sets across a distributed cluster and MapReduce to coordinate, combine and process data from multiple sources.
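As a rough illustration of the MapReduce pattern mentioned above, the sketch below runs the classic word-count example in a single Python process: map each record to key/value pairs, shuffle by key, then reduce each group. Hadoop performs the same three steps, only distributed across a cluster; the sample records are made up.

```python
from collections import defaultdict

def map_phase(record: str):
    # Map: emit a (word, 1) pair for every word in the record.
    for word in record.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # Reduce: combine all values emitted for the same key.
    return key, sum(values)

records = ["big data needs new tools", "data lakes hold raw data"]

# Shuffle: group the intermediate values by key.
groups = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts)   # {'big': 1, 'data': 3, 'needs': 1, ...}
```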

Virtualization

The concept of server virtualization has been around for several years, but it has only recently come into widespread use. The basic idea of server virtualization is that several servers can run on a single system. Most servers typically use only about 10% of the hardware capacity of the machine they run on. Virtualization lets us install several operating systems on one system (one set of hardware) and make full use of its capacity. In the world of networking and virtualization there are numerous products for this purpose.

1. Overview of virtualization concepts

2. Installing and setting up ESXi 5

3. Installing vSphere Client

4. Installing vSphere Web Client

5. Creating a Virtual Machine

6. Installing vCenter Server

7. Reviewing storage concepts

Who this course is for:

Consultants, project managers, and ICT specialists
Physical security and equipment specialists
Anyone planning to design and set up data centres and server rooms
Students seeking work in server room design and maintenance
If you have a question or would like to register:


storage consolidation

Storage consolidation, also called storage convergence, is a method of centralizing data storage among multiple servers. The objective is to facilitate data backup and archiving for all subscribers in an enterprise, while minimizing the time required to access and store data. Other desirable features include simplification of the storage infrastructure, centralized and efficient management, optimized resource utilization, and low operating cost.

There are three storage consolidation architectures in common use: network-attached storage (NAS), redundant array of independent disks (RAID), and the storage area network (SAN). In NAS, the hard drive that stores the data has its own network address. Files can be stored and retrieved rapidly because they do not compete with other computers for processor resources. In RAID storage consolidation, the data is located on multiple disks. The array appears as a single logical hard drive. This facilitates balanced overlapping of input/output (I/O) operations and provides fault tolerance, minimizing downtime and the risk of catastrophic data loss. The SAN is the most sophisticated architecture, and usually employs Fibre Channel technology. SANs are noted for high throughput and ability to provide centralized storage for numerous subscribers over a large geographic area. SANs support data sharing and data migration among servers.

Fibre Channel over IP (FCIP or FC/IP)

Fibre Channel over IP (FCIP or FC/IP, also known as Fibre Channel tunneling or storage tunneling) is an Internet Protocol (IP)-based storage networking technology developed by the Internet Engineering Task Force (IETF). FCIP mechanisms enable the transmission of Fibre Channel (FC) information by tunneling data between storage area network (SAN) facilities over IP networks; this capacity facilitates data sharing over a geographically distributed enterprise. One of two main approaches to storage data transmission over IP networks, FCIP is among the key technologies expected to help bring about rapid development of the storage area network market by increasing the capabilities and performance of storage data transmission.

FCIP Versus iSCSI
The other method, iSCSI, generates SCSI codes from user requests and encapsulates the data into IP packets for transmission over an Ethernet connection. Intended to link geographically distributed SANs, FCIP can only be used in conjunction with Fibre Channel technology; in comparison, iSCSI can run over existing Ethernet networks. SAN connectivity, through methods such as FCIP and iSCSI, offers benefits over the traditional point-to-point connections of earlier data storage systems, such as higher performance, availability, and fault-tolerance. A number of vendors, including Cisco, Nortel and Lucent, have introduced FCIP-based products (such as switches and routers). A hybrid technology called Internet Fibre Channel Protocol (iFCP) is an adaptation of FCIP that is used to move Fibre Channel data over IP networks using the iSCSI protocols.

 

FCoE (Fibre Channel over Ethernet)

FCoE (Fibre Channel over Ethernet) is a storage protocol that enables Fibre Channel communications to run directly over Ethernet. FCoE makes it possible to move Fibre Channel traffic across existing high-speed Ethernet infrastructure and converges storage and IP protocols onto a single cable transport and interface.

The goal of FCoE is to consolidate input/output (I/O) and reduce switch complexity, as well as to cut back on cable and interface card counts. Adoption of FCoE has been slow, however, due to a scarcity of end-to-end FCoE devices and a reluctance on the part of many organizations to change the way they implement and manage their networks.

Traditionally, organizations have used Ethernet for TCP/IP networks and Fibre Channel for storage networks. Fibre Channel supports high-speed data connections between computing devices that interconnect servers with shared storage devices and between storage controllers and drives. FCoE shares Fibre Channel and Ethernet traffic on the same physical cable or lets organizations separate Fibre Channel and Ethernet traffic on the same hardware.

FCoE uses a lossless Ethernet fabric and its own frame format. It retains Fibre Channel’s device communications but substitutes high-speed Ethernet links for Fibre Channel links between devices.

FCoE works with standard Ethernet cards, cables and switches to handle Fibre Channel traffic at the data link layer, using Ethernet frames to encapsulate, route, and transport FC frames across an Ethernet network from one switch with Fibre Channel ports and attached devices to another, similarly equipped switch.
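The snippet below is a highly simplified sketch of that encapsulation idea: a complete Fibre Channel frame rides as the payload of an Ethernet frame carrying the FCoE Ethertype (0x8906). The field layout is illustrative only and is not the exact FC-BB-5 wire format; the MAC addresses and FC frame bytes are placeholders.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype registered for FCoE

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    # Build a bare Ethernet header and append the FC frame as payload.
    # Real FCoE also adds an FCoE header, SOF/EOF delimiters, padding and FCS.
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

fc_frame = b"\x22\x00\x00\x01" + b"...payload..."            # placeholder FC frame bytes
wire_frame = fcoe_encapsulate(b"\x0e\xfc\x00\x00\x00\x01",   # made-up MAC addresses
                              b"\x00\x11\x22\x33\x44\x55",
                              fc_frame)
print(len(wire_frame), "bytes before padding and FCS")
```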

FCoE is often compared to iSCSI, an Internet Protocol (IP)-based storage networking standard.

Veeam Backup and Replication 9 Update 1

Veeam Backup and Replication 9 Update 1 available (Paolo Valsecchi, 18/04/2016)

Veeam has released Backup and Replication 9 Update 1, which includes more than 300 improvements and bug fixes.

Before proceeding with the upgrade, verify that the version in use is 9.0.0.902. To check which version you are running, click Help > About in the main menu.

Verify that the installed version is the one required.

Improvements
Update 1 introduces several improvements and bug fixes.

General

Improved extent selection logic for scale-out backup repositories, with and without per-VM backup file chains enabled.
Added incremental backup mode support for storage-level corruption protection and for the backup file compact and defragmentation features.
The Guest Interaction Proxy selection algorithm has been improved to speed up proxy selection.
Cloud Connect Replication

Support for replica seeding.
Support for replication from backups (except for backup files stored in cloud repositories).
Improved reporting of failover issues.
VMware

The storage fingerprint check can now be disabled using the SshFingerprintCheck (DWORD) registry value.
Jobs run against datastores can now be set to exclude VMs with mounted ISOs residing on the selected datastore.
Application-aware processing

To increase the chances of successful Microsoft VSS processing, the default VSS snapshot timeout has been doubled to 1200 seconds.
Oracle

Added support for Oracle ASM on GPT disks.
Redo log backup operations for non-ASM deployments are now performed directly from the log directory, without first copying the logs to a temporary folder.
Added the ability to ignore databases in NOARCHIVELOG mode without a warning.
Added the ability to disable Oracle processing without having to disable application-aware processing for the other applications running on the same VM.
Improved handling of log backup job interruption in the event of network connectivity loss or a reboot of the backup repository.
Veeam Explorers

Improved the startup of Veeam Explorer for SQL Server and the initialization of the restore process for large backup sets.
Added support for exporting large PST files with Veeam Explorer for Microsoft Exchange.
Tape

Improved the performance of detecting, erasing, ejecting and exporting tapes.
Improved the GFS restore point selection logic for certain scenarios.
SureBackup

SureBackup jobs no longer remove the VM snapshot before deleting the temporary VM itself, speeding up completion of the operation.
SureBackup role descriptions and application test scripts are now obtained from the backup server instead of from the console.
User Interface

Improved user interface responsiveness and reduced the load on the configuration SQL database in environments with a large number of backups and replicas.
For deduplicating storage appliances, users now receive a warning if the backup job or storage appliance settings do not match the recommended settings.
Upgrade

The console now updates itself automatically when connected to a backup server running a newer version.
Network extension appliances are now included in the automatic update process for existing remote components.

Installing Update 1
Reboot the Veeam server before installing the update to clear any locks held by the Veeam services and, after the reboot, stop all Veeam jobs and services.

Download Update 1 from the Veeam website (you need to be logged in) and launch the installer. Click Next to start the installation wizard.

Click Install to proceed with the upgrade.

The update is installed on the system.

Click Finish to exit the wizard.

Start Veeam Backup and Replication and update the remote components. Click Next to proceed.

The components are updated.

When all components have been processed successfully, click Finish to close the window.

After the upgrade, the installed version number becomes 9.0.0.1491.

Additional information can be found in the Release Notes.

logical unit number (LUN)

A logical unit number (LUN) is a unique identifier used to designate an individual or collection of physical or virtual storage devices that execute input/output (I/O) commands with a host computer, as defined by the Small Computer System Interface (SCSI) standard.

SCSI is a widely implemented I/O interconnect that commonly facilitates data exchange between servers and storage devices through transport protocols such as Internet SCSI (iSCSI) and Fibre Channel (FC). A SCSI initiator in the host originates the I/O command sequence that is transmitted to a SCSI target endpoint or recipient storage device. A logical unit is an entity within the SCSI target that responds to the SCSI I/O command.

How LUNs work
LUN setup varies by system. A logical unit number is assigned when a host scans a SCSI device and discovers a logical unit. The LUN identifies the specific logical unit to the SCSI initiator when combined with information such as the target port identifier. Although the term LUN is only the identifying number of the logical unit, the industry commonly uses LUN as shorthand to refer to the logical unit itself.

The logical unit may be a part of a storage drive, an entire storage drive, or all or parts of several storage drives such as hard disks, solid-state drives or tapes, in one or more storage systems. A LUN can reference an entire RAID set, a single drive or partition, or multiple storage drives or partitions. In any case, the logical unit is treated as if it were a single device and is identified by the logical unit number. The capacity limit of a LUN varies by system.

A LUN is central to the management of a block storage array in a storage-area network (SAN). Using a LUN can simplify the management of storage resources because access and control privileges can be assigned through the logical identifiers.

LUN zoning and masking
SANs control host access to LUNs to enforce data security and data integrity. LUN masking and switch-based zoning manage the SAN resources accessible to the attached hosts.

LUN zoning provides isolated paths for I/O to flow through an FC SAN fabric between end ports to ensure deterministic behavior. A host is restricted to the zone to which it is assigned. LUN zoning is generally set up at the switch layer. It can help to improve security and eliminate hot spots in the network.

LUN masking restricts host access to designated SCSI targets and their LUNs. LUN masking is typically done at the storage controller, but it can also be enforced at the host bus adapter (HBA) or switch layer. With LUN masking, several hosts and many zones can use the same port on a storage device, but they can see only the specific SCSI targets and LUNs they have been assigned.
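The following Python sketch illustrates the masking-table concept: the array keeps a list of LUNs per initiator, and a host only "sees" what it has been assigned. The WWPNs and LUN numbers are made up, and this is a conceptual model rather than any particular array's implementation.

```python
# Conceptual LUN masking table: initiator WWPN -> set of visible LUNs.
masking_table = {
    "10:00:00:90:fa:00:00:01": {0, 1},   # host A may see LUN 0 and LUN 1
    "10:00:00:90:fa:00:00:02": {2},      # host B may see LUN 2 only
}

def visible_luns(initiator_wwpn: str) -> set:
    return masking_table.get(initiator_wwpn, set())

def can_access(initiator_wwpn: str, lun: int) -> bool:
    return lun in visible_luns(initiator_wwpn)

print(can_access("10:00:00:90:fa:00:00:01", 1))   # True
print(can_access("10:00:00:90:fa:00:00:01", 2))   # False: LUN 2 is masked from host A
```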

LUNs and virtualization
A LUN constitutes a form of virtualization in the sense that it abstracts the hardware devices behind it with a standard SCSI method of identification and communication. The storage object represented by the LUN can be provisioned, compressed and/or deduplicated as long as the representation to the host does not change. A LUN can be migrated within and between storage devices, as well as copied, replicated, snapshotted and tiered.

A virtual LUN can be created to map to multiple physical LUNs or a virtualized capacity created in excess of the actual physical space available. Virtual LUNs created in excess of the available physical capacity help to optimize storage use, because the physical storage is not allocated until the data is written. Such a virtual LUN is sometimes referred to as a thin LUN.
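A minimal Python sketch of the thin LUN idea, assuming a simple block-map model: the full capacity is advertised to the host up front, but physical space is only consumed when a block is first written.

```python
class ThinLUN:
    def __init__(self, advertised_blocks: int):
        self.advertised_blocks = advertised_blocks  # what the host sees
        self.allocated = {}                         # block number -> data actually stored

    def write(self, block: int, data: bytes):
        if not 0 <= block < self.advertised_blocks:
            raise ValueError("block outside advertised capacity")
        self.allocated[block] = data                # physical space consumed only here

    def physical_blocks_used(self) -> int:
        return len(self.allocated)

lun = ThinLUN(advertised_blocks=1_000_000)   # looks like a large LUN to the host
lun.write(42, b"some data")
print(lun.physical_blocks_used())            # 1 block of physical space actually used
```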

A virtual LUN can be set up at the server operating system (OS), hypervisor or storage controller. Because the virtual machine (VM) does not see the physical LUN on the storage system, there is no need for LUN zoning.

Software applications can present LUNs to VMs running on guest OSes. Proprietary technology such as VMware’s Virtual Volumes can provide the virtualization layer and the storage devices to support them with fine-grain control of storage resources and services.

Types of LUNs
The underlying storage structure and logical unit type may play a role in performance and reliability. Examples include:

Mirrored LUN: Fault-tolerant LUN with identical copies on two physical drives for data redundancy.
Concatenated LUN: Consolidates several LUNs into a single logical unit or volume.
Striped LUN: Writes data across multiple physical drives, potentially enhancing performance by distributing I/O requests across the drives.
Striped LUN with parity: Spreads data and parity information across three or more physical drives. If a physical drive fails, the data can be reconstructed from the data and parity information on the remaining drives. The parity calculation may have an impact on write performance.
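As a simple illustration of the parity idea behind a striped LUN with parity, the sketch below stores the XOR of the data blocks as parity and rebuilds a lost block from the surviving blocks. Real arrays rotate parity across drives and work on fixed-size stripes; the block contents here are made up.

```python
def xor_blocks(blocks):
    # XOR equal-length byte blocks together.
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]            # one stripe spread across three drives
parity = xor_blocks(data)                     # parity block stored on a fourth drive

# The drive holding the second block fails; rebuild it from the rest plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])                     # True
```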

Installing and Configuring FreeNAS

FreeNAS is an open source network-attached storage (NAS) operating system based on BSD and the ZFS filesystem, with integrated RAID support. It can be installed on virtual machines or on physical machines to share data storage over a computer network.

Using the FreeNAS software you can easily build your own centralized, easily accessible data storage at home and manage it through a dedicated web interface, which was originally written in PHP and later rewritten from scratch in Python/Django.

FreeNAS supports Linux, Windows and OS X clients, as well as virtualization hosts such as VMware and XenServer, using protocols such as CIFS (Samba), NFS, iSCSI, FTP, rsync, etc.

Home users can build FreeNAS storage to store their videos and files and stream them from FreeNAS to network devices or smart TVs. If you are planning to build a torrent server, you can use FreeNAS to set one up. Several plugins are available for FreeNAS, including the following.

OwnCloud = Build your own cloud storage.
Plex Media Server = Build your own video streaming server.
Bacula = Use as a network backup server.
Transmission = Create a torrent server.
Features of FreeNAS

Supports the ZFS file system.
Built-in RAID with parity, cron jobs and SMART tests.
Supports directory services such as LDAP, NIS, NT4 and Active Directory.
Supports the NFS, FTP, SSH, CIFS and iSCSI protocols.
Support for Windows file systems such as NTFS and FAT.
Periodic snapshots, replication and rsync support.
Web interface with GUI and SSL.
Reporting features such as email notifications.
Disk encryption and many more features.
UPS support for backup power systems.
Rich GUI graphs and reports for memory, CPU, storage, network, etc.
In this four-article FreeNAS series, we will cover the installation and configuration of FreeNAS with storage, and in later articles we will cover setting up a video streaming and torrent server.

Part 1: Installing and Configuring FreeNAS 9.2.1.8
Part 2: Configuring FreeNAS Settings and Adding ZFS Storage
Part 3: Create Your Own “Home Media Streaming Server” Using Plex with FreeNAS
Part 4: Upgrading FreeNAS from Older Version to Newer
My Server Setup

Hardware : Virtual Machine 64-bit
Operating System : FreeNAS-9.2.1.8-RELEASE-x64
IP Address : 192.168.0.225
8GB RAM : Minimum RAM
1 Disk (5GB) : Used for OS Installation
8 Disks (5GB) : Used for Storage
Download FreeNAS 9.2.1.8
To set up a FreeNAS operating system, you will need to download the latest stable installation ISO image (i.e. version 9.2.1.8) from the FreeNAS download page, or you can use the following links to download the image for your system architecture. I’ve included download links for CD/DVD and USB bootable images of FreeNAS, so select and download images as per your requirements.

CD/DVD Images

Download FreeNAS-9.2.1.8-RELEASE-x86.iso – (185MB)
Download FreeNAS-9.2.1.8-RELEASE-x64.iso – (199MB)
USB Images

Download FreeNAS-9.2.1.8-RELEASE-x86.img.xz – (135MB)
Download FreeNAS-9.2.1.8-RELEASE-x64.img.xz – (143MB)
Installing FreeNAS System
1. Now it’s time to install and configure FreeNAS. Like any operating system, FreeNAS follows similar installation steps, and installing it won’t take more than two minutes.

2. After you download the FreeNAS ISO image from the links above, burn the ISO image to a disc and boot from it if you have a CD/DVD drive, or boot directly from the USB image if you’re using that.

3. After booting the system with the FreeNAS image, the installation starts by default; if not, press Enter to continue the installation.

Booting FreeNAS
4. To install FreeNAS, choose Install/Upgrade. This installs FreeNAS if it is not already present.

Install FreeNAS
5. In this step we need to choose where FreeNAS should be installed. We have nine drives in total, so here I’m using the first 5 GB drive (ada0) for the FreeNAS installation; the other eight drives will be used for storage (to be discussed in the next part of this series).

Choose ada0 drive from the listed drives and press Enter to continue.

Choose FreeNAS Install Drive
6. After selecting the drive, the next screen warns you about data loss. If you have any important data on the selected drive, please take a backup before installing FreeNAS on it.

After pressing ‘Yes‘, all the data on that drive will be destroyed during installation.

Warning: please take a backup of the selected drive before starting the FreeNAS setup.

Drive Data Loss Warning
7. After a few minutes the installer reaches the end of the installation process. Choose OK to reboot the machine and remove the installation disk.

FreeNAS Installation Completed
8. On the next screen, choose the third option to reboot the machine and remove the setup disk.

Reboot System
9. After the FreeNAS setup completes, the console setup menu appears, where we can set the DNS and IP address used to access the FreeNAS web dashboard.

By default it is assigned a dynamic IP address at first, which we then have to configure manually. Here we can see that we’ve received the dynamic address 192.168.0.10; now we have to configure our static IP.

FreeNAS Console Setup
Note: first, let’s configure DNS. I have a valid name resolver at my end, so I’ll configure my DNS settings.

10. To configure DNS, choose option 6 and press Enter, then enter the DNS information, such as the domain and the IP address of the DNS server, and press Enter.

Configuring the DNS settings before the IP address allows names to be resolved via DNS. If you don’t have a valid DNS server on your side, you can skip this step.

Configure DNS for FreeNAS
11. After configuring the DNS settings, it’s time to configure the network interface. To configure the interface, press 1 and select the first (default) interface.

Use the following settings for configuring static IP:

Enter an option from 1-11: 1
1) vtnet0
Select an interface (q to quit): 1
Reset network configuration? (y/n) n
Configure interface for DHCP? (y/n) n
Configure IPv4? (y/n) y
Interface name: eth0
IPv4 Address: 192.168.0.225
IPv4 Netmask: 255.255.255.0
Saving interface configuration: OK
Configure IPv6? n
Finally, choosing no for IPv6 and pressing Enter configures the interface, and the settings are saved automatically.

Configure FreeNAS Network
12. After configuring the network interface settings, you will see that the IP address has changed from 192.168.0.10 to 192.168.0.225. We can now use this address to access the FreeNAS GUI from any web browser.

Confirm FreeNAS IP Address
13. To access the FreeNAS GUI, open a web browser and enter the IP address that we used when configuring the interface.

http://192.168.0.225
At first login, we need to define a password for the root user to access the GUI. Set a strong password for your storage server and continue to log in.

Set FreeNAS root Password
14. After logging in, you will see information about the FreeNAS server, such as the domain name, version, total memory available, system time, uptime, system load, etc.

FreeNAS Server Information
That’s it. In this article we’ve installed and configured the FreeNAS server. In the next article we will discuss how to configure FreeNAS settings step by step and how to define storage in FreeNAS. Until then, stay tuned for updates and don’t forget to add your comments.

Read More: http://www.freenas.org/