Install Oracle 10G On Red Hat Enterprise Linux 512Mb


Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI, by Jeffrey Hunter. Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Linux for less than US$2,700. Introduction. One of the most efficient ways to become familiar with Oracle Real Application Clusters (RAC) 11g technology is to have access to an actual Oracle RAC 11g cluster. There's no better way to understand its benefits—including fault tolerance, security, load balancing, and scalability—than to experience them directly.

Unfortunately, for many shops, the price of the hardware required for a typical production RAC configuration makes this goal impossible. A small two-node cluster can cost from US$10,000 to well over US$20,000.


This cost would not even include the heart of a production RAC environment, the shared storage. In most cases, this would be a Storage Area Network (SAN), which generally starts at US$10,000. For those who want to become familiar with Oracle RAC 11g technology without a major cash outlay, this guide details how to configure an Oracle RAC 11g Release 2 system using commercial off-the-shelf components and downloadable software at an estimated cost of US$2,200 to US$2,700.


The system will consist of a two-node cluster, with both nodes running Oracle Enterprise Linux (OEL) Release 5 Update 4 for x86_64. All shared disk storage for Oracle RAC will be based on iSCSI using Openfiler Release 2.3. Note that this layout is for development and testing only; for example, the shared Oracle Clusterware files (OCR and voting files) and all physical database files in this article will be set up on only one physical disk, while in practice those should be spread across multiple physical drives.

In addition, each Linux node will only be configured with two network interfaces: eth0 for the public network and eth1 for both the Oracle RAC private interconnect and the network storage traffic. For a production RAC implementation, the private interconnect should be at least Gigabit (or more) with redundant paths, and should be used exclusively by Oracle to transfer Cluster Manager and Cache Fusion related data.

A third dedicated network interface (eth2, for example) should be configured on another redundant Gigabit network for access to the network storage server (Openfiler). Oracle Documentation. While this guide provides detailed instructions for successfully installing a complete Oracle RAC 11g system, it is in no way a substitute for the official

Oracle documentation (see list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com. Network Storage Server. Powered by

rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS, and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy-to-manage solution fronted by a powerful web-based management interface. Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI

capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. The operating system and Openfiler application will be installed on one internal SATA disk.

A second internal 73GB 15K SCSI hard disk will be configured as a single volume group.

The Openfiler server will be configured to use this volume group for iSCSI-based storage, which will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle grid infrastructure and the Oracle RAC database. Oracle Grid Infrastructure 11g Release 2 (11.2). With Oracle grid infrastructure 11g Release 2 (11.2), the Automatic Storage Management (ASM) and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. You must install the grid infrastructure in order to use Oracle RAC 11g

Release 2. Configuration assistants that configure ASM and Oracle Clusterware start after the installer interview process. While the installation of the combined products is called Oracle grid infrastructure, Oracle Clusterware and Automatic Storage Management remain separate products. After Oracle grid infrastructure is installed and configured on both nodes in the cluster, the next step will be to install the Oracle RAC software on both Oracle RAC nodes.
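Both installations are started in the same way: the downloaded media is unzipped and runInstaller is launched as the appropriate software owner. A minimal sketch, assuming hypothetical staging directories and the 11.2.0.1 OTN file names (your file names and paths may differ):

# On the first node, as the grid infrastructure owner (grid):
unzip linux.x64_11gR2_grid.zip -d /home/grid/stage        # staging path is an assumption
cd /home/grid/stage/grid
./runInstaller                                            # installs Oracle grid infrastructure (Clusterware + ASM)

# Later, as the database software owner (oracle):
unzip linux.x64_11gR2_database_1of2.zip -d /home/oracle/stage
unzip linux.x64_11gR2_database_2of2.zip -d /home/oracle/stage
cd /home/oracle/stage/database
./runInstaller                                            # installs the Oracle RAC database software on both nodes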

In this article, the Oracle grid infrastructure and Oracle RAC software will be installed on both nodes using the optional Job Role Separation configuration. One OS user will be created to own each Oracle software product. Throughout this article, the user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user.

Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups. Automatic Storage Management and Oracle Clusterware Files. As previously mentioned, Automatic Storage Management (ASM) is now fully integrated with Oracle Clusterware in the Oracle grid infrastructure.
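As a rough illustration of this job role separation, the groups and users described above might be created as follows. The numeric IDs are placeholders, and the extra ASM-specific groups used by a full job role separation setup are omitted for brevity:

# Run as root on every RAC node; keep the IDs identical on all nodes.
groupadd -g 1000 oinstall      # Oracle Inventory group, primary group for both owners
groupadd -g 1200 dba           # OSDBA group
groupadd -g 1201 oper          # optional OSOPER group

useradd -u 1100 -g oinstall              grid    # owns grid infrastructure (Clusterware + ASM)
useradd -u 1101 -g oinstall -G dba,oper  oracle  # owns the Oracle RAC database software

passwd grid
passwd oracle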

Oracle ASM and Oracle Database 11g Release 2 provide a more enhanced storage solution than previous releases. Part of this solution is the ability to store the Oracle Clusterware files, namely the Oracle Cluster Registry (OCR) and the Voting Files (VF), in ASM. This feature enables ASM to provide a unified storage solution, storing all the data for the clusterware and the database without the need for third-party volume managers or cluster file systems. Just like database files, Oracle Clusterware files are stored in an ASM disk group and therefore utilize the ASM disk group configuration with respect to redundancy.

For example, a Normal Redundancy ASM disk group will hold a two-way mirrored OCR. A failure of one disk in the disk group will not prevent access to the OCR. With a High Redundancy ASM disk group (three-way mirrored), two independent disks can fail without impacting access to the OCR. With External Redundancy, no protection is provided by Oracle. Oracle only allows one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, Oracle recommends using either normal or high redundancy ASM disk groups.

If disk mirroring is already occurring at either the OS or hardware level, you can use external redundancy. The Voting Files are managed in a similar way to the OCR. They follow the ASM disk group configuration with respect to redundancy, but are not managed as normal ASM files in the disk group. Instead, each voting disk is placed on a specific disk in the disk group.

The disks and the location of the Voting Files on those disks are stored internally within Oracle Clusterware. The following describes how the Oracle Clusterware files are stored in ASM after installing Oracle grid infrastructure using this guide: the voting file shows as ONLINE on the ASM disk ORCL:CRSVOL1, and the OCR can be viewed with ASMCMD (a sketch of the commands follows this paragraph). Please note that installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded. Previous versions of this guide used OCFS2 for storing the OCR and voting disk files. This guide will store the OCR and voting disk files on ASM, in an ASM disk group named +CRS using external redundancy, which means one OCR location and one voting disk location. The ASM disk group should be created on shared storage and be at least 2GB in size.
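A minimal sketch of how those locations can be verified after the grid infrastructure install, assuming the grid user has the grid home and the ASM instance in its environment (the exact file names and IDs will differ on every system):

# As the grid infrastructure owner (grid) on one of the RAC nodes:
crsctl query css votedisk      # lists the voting file(s); each should show STATE ONLINE on an ASM disk such as ORCL:CRSVOL1
ocrcheck                       # reports the OCR location (+CRS) and checks its integrity

# Browse the clusterware files stored inside ASM:
asmcmd lsdg                    # shows the +CRS disk group and its redundancy type
asmcmd ls -l +CRS              # lists the OCR and related files stored in the disk group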

Install 11g RAC on Linux -- Pre-Installation Tasks. Verify that the required RPM packages are installed on both RAC nodes.

If not installed, then download and install them using YUM. The packages can be checked with rpm -q; among others, binutils, unixODBC, unixODBC-devel, and iscsi-initiator-utils are required (see the Oracle installation guide for the full list). A sketch of the check follows.
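A minimal sketch of the check, showing only the packages named above (the full required list should be taken from the Oracle installation guide for your release):

# Query a few of the required packages; any that report "is not installed" must be added.
rpm -q binutils unixODBC unixODBC-devel iscsi-initiator-utils

# Install anything that is missing (assumes a configured yum repository or mounted install media).
yum install -y unixODBC unixODBC-devel iscsi-initiator-utils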

Next, configure the public and private networks. Make sure that if you configure eth0 as the public interface on one node, the same interface is used for the public network on the other node (and likewise for the private interface). Follow the below steps to configure these networks: (1) Change the hostname value on each node by executing the hostname command and updating the network configuration files accordingly. (2) I added the required kernel parameter lines to /etc/sysctl.conf. Every OS process needs semaphores;

it waits on them for resources. If the current value of any parameter is lower than required, change it. Then add the shell limit entries for the Oracle software owners to /etc/security/limits.conf, and add or edit the corresponding pam_limits line in /etc/pam.d/login. Do the same on both nodes in the cluster; a sketch of typical settings follows this paragraph.
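The exact values are not preserved in this copy of the article; the following is a minimal sketch using the values commonly documented for Oracle 11g on Linux, and should be checked against the installation guide for your specific release and memory size:

# /etc/sysctl.conf additions (typical 11g values; minimums vary slightly between releases):
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# Load the new values without a reboot:
sysctl -p

# /etc/security/limits.conf additions (repeat for the grid user if using job role separation):
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

# /etc/pam.d/login -- make sure the limits are applied at login:
session    required     pam_limits.so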

Next, establish SSH user equivalence for the oracle user between the nodes so that the installer can copy files to the remote node. The first time the oracle user connects to a remote node it will be asked to confirm the host key; simply enter 'yes' and continue.

Afterwards, the oracle user should be able to connect to the remote node without being prompted for a password. If you get an error message when trying to connect to the remote node, fix the SSH configuration before continuing; a sketch of the key setup is shown below.
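A minimal sketch of one common way to set up the equivalence. The node names node1 and node2 are assumptions taken from this guide's naming, and the same steps apply to the grid user if job role separation is used:

# As the oracle user on node1 (repeat on node2, then merge the keys):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa            # accept the defaults; an empty passphrase keeps the equivalence non-interactive
ssh-keygen -t dsa

# Collect every node's public keys into one authorized_keys file and copy it to all nodes.
cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh node2 "cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Verify: these should no longer prompt for a password (answer 'yes' to the first host-key question).
ssh node1 date
ssh node2 date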

Shared storage: the shared disk must support concurrent access from all the nodes in the 11g R1 RAC cluster. There are different types of storage management software out there that allow you to build a NAS/SAN. I have chosen Openfiler because it is a Linux 2.6 based storage management OS that supports

iSCSI. You can then create volume groups on the device(s) and later present the volumes to the RAC nodes. Steps involved to install/configure an iSCSI-based IP SAN: (1) Install the Openfiler OS. (2) Attach external disks to this server.

I have attached a WD USB (MyBook) external hard drive. I planned to create four volumes for ASM (the asm-dsk volumes)

and one for OCFS2. ASM: DATA and FLASH disk groups for database files and Flash Recovery Area files. OCFS2: OCR and Voting Disks. (3) Configure the Openfiler setup (iscsi-target/volumes). I have followed the above guide to configure the Openfiler system and create the

ASM disks and OCFS disks. Below are the sample screen shots for my openfiler setup. The external disk is presented to the server as SCSI disks as shown below.

In my case it is /dev/sda. I have created a physical volume on this device and then created the volume group (rac) on it. The five volumes below are created under this volume group. Also make sure that each volume allows shared access to all the nodes in the cluster. You can do that by clicking the 'Edit' link on the above screen for each volume name, as shown below. The below screen shows that both nodes in the cluster have shared access on the volumes.
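In this article those objects are created through the Openfiler web interface; for reference only, the roughly equivalent LVM commands on the storage server would look like the sketch below. The volume name and size are placeholders, not values from the original setup:

# CLI equivalent of the web-UI steps, run as root on the Openfiler server.
pvcreate /dev/sda                   # initialize the external disk as an LVM physical volume
vgcreate rac /dev/sda               # create the volume group used for the RAC volumes
lvcreate -L 10G -n asm-dsk1 rac     # one logical volume per iSCSI target; repeat for each volume
lvs                                 # list the volumes created under the rac volume group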

Click on the General tab to add/modify the RAC node information. The network information provided in this table is the private network used for the shared storage. At the end, make sure that the iSCSI protocol is enabled in Openfiler; you can enable it by clicking on the Services tab. (4) Discovering the volumes on the RAC nodes as SCSI devices. NOTE: make sure that SELinux and the firewall have been disabled on all the RAC nodes. If not, then disable them first (one common way is sketched below).
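The original text does not show the exact commands; the following is a minimal sketch of one common way to disable both on RHEL/OEL 5, run as root on each RAC node:

# Stop the firewall now and keep it off across reboots.
service iptables stop
chkconfig iptables off

# Put SELinux into permissive mode immediately...
setenforce 0
# ...and disable it permanently by setting SELINUX=disabled in /etc/selinux/config.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config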

(a) Make sure the iSCSI initiator package (iscsi-initiator-utils) is installed on all the RAC nodes. If not, then download the RPM and install it. The iSCSI targets are made known to each node through DiscoveryAddress entries in /etc/iscsi.conf; each DiscoveryAddress line holds the address of the nas-server (the Openfiler private-network address). A sketch of these settings appears after step (c). (b) Reboot all the nodes and run the iscsi-ls command to see if the volumes have been discovered on the RAC nodes as SCSI devices.

(c) The scsi Id in this output maps to the Host ID of the volumes on the iscsi-target; for each discovered device, note the Host ID, the Target ID, the volume name, and the local device it was discovered as.
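A minimal sketch of steps (a) and (b), using the legacy /etc/iscsi.conf DiscoveryAddress format referenced in this guide. The address shown is a placeholder for your Openfiler private-network IP, which is not preserved in this copy of the article:

# /etc/iscsi.conf on each RAC node -- point the initiator at the Openfiler box.
# DiscoveryAddress=<private IP of the nas-server>
DiscoveryAddress=192.168.2.195      # placeholder address; use your own nas-server IP

# Restart the initiator (or reboot the node) and list the discovered targets/devices.
service iscsi restart
iscsi-ls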

Partitioning the shared disk: I need the volumes for ASM and one for OCFS, so I have created a single primary partition on each of the discovered devices.

Create the partitions from ONLY one of the RAC nodes. This can be any node in the cluster.

I ran fdisk against each discovered iSCSI device and created a single primary partition (partition 1) spanning the whole disk, accepting the default first and last cylinders. fdisk warns that changes will remain in memory only until you decide to write them (after that the previous content won't be recoverable), and that a cylinder count larger than 1024 could in certain setups cause problems with (1) software that runs at boot time (e.g., old versions of LILO) and (2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK); these warnings are harmless for these data-only iSCSI disks. A representative session is shown below; the same steps were repeated for each device (/dev/sdc, /dev/sdd, /dev/sde, and the remaining discovered volumes).

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-...): <Enter to accept the default, 1>
Last cylinder or +size or +sizeM or +sizeK (1-...): <Enter to accept the default, the last cylinder>

Command (m for help): p

Disk /dev/sdc: ... GB
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         ...         ...   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Devices disappear after reboot of the Openfiler server (nas-server):

I ran into this issue after rebooting the Openfiler server. This is most likely because the volume groups are not automatically reactivated on the Openfiler server after the reboot. I have included the necessary workaround below. Also make sure that the firewall and SELinux are disabled on all the machines in the cluster. SOLUTION: following the below steps resolved the mentioned issue in my environment. Scan the systems for the volume groups as root: vgscan. Activate the volumes as root: vgchange -ay.

On the client machines (RAC nodes), restart the iscsi service as root: service iscsi restart. Confirm that the iscsi devices are available as root: iscsi-ls. To make this persistent, add the volume group activation commands to /etc/rc.local on the nas-server (Openfiler), and add the iscsi service restart to /etc/rc.local on each RAC node, as sketched at the end of this section. Device names are not persistent after a reboot of the RAC nodes. This behavior has caused very serious issues for the OCR and Voting Disk volumes: they don't get mounted.
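Returning to the rc.local additions mentioned above for the volume-group workaround, a minimal sketch using only the commands already shown (the exact lines from the original article were not preserved):

# /etc/rc.local on the nas-server (Openfiler): reactivate the volume groups at boot.
vgscan
vgchange -ay

# /etc/rc.local on each RAC node: restart the initiator so the iSCSI devices reappear.
service iscsi restart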