|Exadata Storage Grid Resiliency||The session will focus on Exadata Storage Grid resiliency at both the disk and storage cell levels. Depending on the disk group redundancy, various Exadata storage components can fail before actual data loss or corruption occurs. However, when dealing with multiple storage component failures in a short time span, it is important to understand what impact they can have on the entire storage grid.|| Alex Fatkulin|
|Baking Big Data Batches: Fan-Out Parallel Query and Big Data SQL||Orchestrating an efficient parallel scan of multiple datastores as part of a single query is complicated. For each query, a balance must be struck between synchronized, serial control from the query coordinator and asynchronous parallel behavior from the remote systems. Oracle Big Data SQL offers a novel solution to this problem: a unique ability to orchestrate fan-out parallelism on queries which simultaneously touch Oracle Exadata and Hadoop. In this talk, we explore the changes to Oracle Parallel Query necessary to accomplish this parallelism, and discuss future improvements to both the control and execution of queries spanning Oracle Database and data sources ranging from Hadoop to NoSQL.|| Dan McClary|
|Oracle Exadata and database memory||This session is an in-depth look into the memory usage of the Oracle database. Proper use of operating system memory facilities is key for performance on the Oracle Exadata platform, though not limited to it. At Enkitec we have seen many incorrectly configured memory settings at both the operating system layer and the database, sometimes with drastic performance implications as a result. Understanding how the database manages memory becomes even more important with database consolidation, which is regularly seen on Exadata.
The session details the memory settings of the operating system layer (Linux, as used on Exadata). It also explains the differences of the database private and shared memory areas, and the different memory models of the Oracle database (AMM, ASMM, manual).
Everything is presented with a clear method of diagnosis, which attendees can reuse.
|| Frits Hoogland|
|Engineered Systems and Private Clouds - Easy Right?||There is a lot of hype about clouds and there is a lot of hype about engineered systems. But, is it easy if we combine them? Can a company or business unit cut corners when deploying a private cloud on engineered systems, such as Exadata and Exalogic?
Also, what steps are needed for defining and deploying a successful private cloud? What are some of the best practices as well as lessons learned in deploying private clouds?
This discussion provides an approach to engaging in a private cloud implementation, including five things to start your cloud. It also introduces the concept of creating factories for deployments and migrations. These factories provide efficiency and agility with cloud component deployments, and these efficiencies directly correlate to reductions in the costs and risks surrounding the configuration and management of a cloud deployment.|| Hank Tullis|
|Real World Experience with the (O)VCA||This presentation will cover my real-world experience with a large VCA-based project. The project encompasses the deployment of three VCA units, each with 12+ compute nodes, with ZFS ZS3-2 storage as the underlying storage component.
In the presentation I will discuss the issues surrounding VCA deployment, including Windows VM specifics, external storage presentation (FC, iSCSI, NFS), tape storage and backup issues, OVM Manager, EM12c integration, patching and new node inclusion, and network issues (including firewalls and routing), as well as the tools used and the benchmarks we created for the storage, network and compute aspects. We will discuss both the positives and the negatives of a solution which covers several hundred Linux and Windows VMs.
I will also discuss the benefits we encountered from specific tuning using ZFS, and how it is actually a very good storage platform for consolidation.|| James Anthony|
|DBaaS at Scale: Engineered Systems Consolidation in Banking||This presentation will show the design decisions taken and the engineering work that was required for a large scale DBaaS implementation project at a global investment bank.
The bank had decided to consolidate its entire Oracle estate of several thousand databases onto a collection of Exadata racks and ODAs. This presentation will focus on the engineering challenges that were faced in ensuring these new engineered systems were production-ready for use within the bank.
After attending this presentation, attendees will have an idea of the design decisions required when performing an engineered systems consolidation at scale, and will be better placed to avoid some of the pitfalls of using engineered systems for a consolidation project.|| Jason Arneil|
|Modern Infrastructure Science: Old Doctrines, New Doctrines||Aside from all the cultural, financial and vendor disruption, the doctrines underlying what Google, Yahoo and Facebook initiated (resulting in things like Accumulo, Hadoop, Hive and Open Compute) completely deconstructed everything about doing computing at scale. With that came a different set of development and operational doctrines, driven by the business rather than by vendors and the way it had always been done.|| Jeff Needham|
|Bloom Filters for DBAs||
The presentation covers the internals of Bloom filters, a powerful mechanism used in Exadata to pre-join tables at the storage server level and thus reduce the volume of data the Oracle database servers need to process. Bloom filters can significantly reduce the execution times of queries, especially when the 12c in-memory option is enabled. A Bloom filter is a probabilistic algorithm for doing existence tests in less memory than a full list of keys would require. In other words, a Bloom filter is a method for representing a set of n elements (also called keys) to support membership queries. The presentation also covers all the situations where the Oracle database makes use of Bloom filters.|| Julian Dontcheff|
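The membership-test idea described in this abstract can be sketched with a toy Bloom filter in Python. This is an illustrative sketch only, not Oracle's implementation; the class name, bit-array size, and use of SHA-256 as the hash family are all assumptions chosen for clarity:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash functions setting bits in an m-bit array.

    Membership tests may return false positives, never false negatives,
    which is why it is safe to use as a pre-filter before an exact join.
    """

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, key):
        # Derive k independent bit positions by salting the key.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```

The filter stores only bit positions, not the keys themselves, which is why it needs far less memory (and less data shipped to the storage cells) than a full key list would.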
|Exadata Capacity Planning||The first Exadata was announced in 2008, and its presence has grown every year; nowadays most mid-to-large organizations have one. It is very common to see applications either failing to make good use of its capabilities or abusing resources to the point that application performance suffers: for example, Flash being underutilized, HDD IOPS/throughput reaching their limits and causing delays at the application layer, or compute nodes reaching 100% CPU due to excessive parallelism. This presentation will focus on gathering existing Exadata utilization statistics from the OEM repository, such as compute node/storage cell CPU utilization and HDD/Flash IOPS, throughput and response times. It will also cover how to leverage this data to determine whether you are underutilizing your Exadata or reaching its limits, and what should be done to make the best use of it.|| Kapil Goyal|
|TBA||TBA|| Kerry Osborne|
|TBA ||TBA|| Kodi Umamageswaran|
|A Deep Dive into HCC Mechanics & Internals||HCC is a much-hyped Oracle feature. Unfortunately, there are still too many misconceptions about HCC circulating on Oracle forums and blogs. This talk is about demystifying them. The audience will learn about the concepts of HCC as well as the other Oracle compression algorithms (basic/OLTP) before diving into the HCC-specific compression algorithms and their implementation. We will use block dumps to show the structure of the so-called Compression Unit and how to decode it.
Useful tips from HCC implementations with regard to information lifecycle management (ILM), including the new 12c Automatic Data Optimisation feature, conclude this talk.|| Martin Bach|
|Database Health-Check: A healthy step before migrating into Exadata||Exadata is often used as a "consolidation" environment, where dozens of databases are re-platformed together for diverse reasons. Relocating several databases could be as simple as restoring full backups, but it can also be taken as an opportunity to fix old concerns and, in some cases, unknown risks.
When a client wants to perform a "cleanup" of the known and unknown issues of a few databases, perhaps as preparation for a re-platform onto a new engineered system, a methodical and unbiased "database health-check" can be of great benefit.
What exactly is a database health-check and what is usually reviewed? This session explains the diverse components of a practical database health-check, and how one can unearth unexpected concerns. Common findings are presented, as well as their remedies.
Sometimes the major challenge for a DBA is to know what he/she does not know about a database. The expected audience for this session is managers and database administrators who want to be proactive with regard to their databases.|| Mauro Pagano|
|Super Cluster: the Swiss Army Knife of the Engineered Systems||This session will describe what was learned by moving from three M9000s to two SuperClusters.
Over the last couple of years many engineered systems have seen the light of day; the SPARC SuperCluster (SSC) is one of them.
It offers a lot of flexibility and ways to consolidate your current Solaris environment, at your own pace, onto Exadata Storage Cell-powered SPARC hardware, while protecting your investments.
This talk will give a short overview of the line-up but focuses on the technology used and the steps taken to consolidate three M9000s onto two SuperClusters.
Many Oracle Database technologies were used in this project:
RAC One Node and Data Guard,
Real Application Testing (RAT) to test-drive the migration,
DBRM, instance caging and IORM to make consolidation work,
Enterprise Manager 12c and the Exadata plug-in.
It will give a brief overview of what the SuperCluster is and why it is a great solution if you are a Solaris shop and want a flexible platform with the power of Exadata Storage Cells.
It will go over the issues encountered and how they were resolved, describe the stages of the project, and explain how the InfiniBand network and Solaris Zones work together.
If you want to know how the Oracle SuperCluster can make a difference for you and your organisation, and how to avoid the pitfalls and potential issues, then this session is for you.
|| Philippe Fierens|
|Where’d My Resources Go? Capacity Planning at Exa-Scale||Oracle Exadata Database Machine is designed to significantly improve performance of OLTP, Data Warehouse, and mixed load applications. Managing CPU, Memory, and IO resources is key for good performance. Exadata engineered systems provide a large number of resources and a wealth of metrics to describe the utilization of those resources. But how does one rationalize 300+ metrics into a story about current utilization and forecasted capacity? This session will describe how we solved this problem for a Fortune 5 bank. We will go over how to organize and aggregate metrics to tell a capacity story. We will discuss methods to forecast future utilization in the presence of noisy data, environment changes (patches, incidents, downtime), and future business forecasts. Unlike most forecasts, this method predicts a distribution of outcomes so one can look at not only the average case, but also outcomes with a 5% probability. We will wrap up with a novel dashboard view of an Exadata system that can show a manager the current and 6-month projected utilization at a system level while allowing a performance tuner to delve down into the details.|| Rishi Khan|
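The kind of distributional forecast this abstract describes (predicting a spread of outcomes rather than a single average, so that a 5%-probability outcome can be read off) can be sketched with a simple bootstrap simulation in Python. This is an illustrative sketch of the general technique, not the presenters' actual method; the function name and parameters are assumptions:

```python
import random
import statistics

def forecast_distribution(history, horizon_days=180, trials=2000):
    """Bootstrap daily utilization changes to project a distribution of outcomes.

    history: list of daily utilization samples (e.g. CPU %, IOPS).
    Returns the median projection and the level with only a 5% chance
    of being exceeded at the horizon.
    """
    # Resample observed day-to-day deltas instead of assuming a distribution,
    # which keeps noise, patches, and incident spikes in the forecast.
    deltas = [b - a for a, b in zip(history, history[1:])]
    outcomes = []
    for _ in range(trials):
        level = history[-1]
        for _ in range(horizon_days):
            level += random.choice(deltas)
        outcomes.append(level)
    outcomes.sort()
    return {
        "median": statistics.median(outcomes),
        "p95": outcomes[int(0.95 * trials)],  # exceeded with only 5% probability
    }
```

A dashboard can then plot the median alongside the 95th percentile so a manager sees the likely 6-month utilization while a tuner can ask about the tail.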
|SmartScan Deep Dive||A top-to-bottom look at how “Table Access Full” works in the context of SmartScan, including recent developments in how buffer cache and direct read work together, the introduction of “offload servers”, how SmartScan works with DBIM, and new features in 12.1.0.2. We will also be looking at SmartScan changes for the X5, the All Extreme-Flash configuration, and Big Data SQL. This talk finishes with notes on how to triage issues with SmartScan in the field.|| Roger MacNicol|
|Tips and Tricks for Successful Consolidation on Exadata||How do you guarantee CPU or flash space for a particular database? Or keep one database's heavy scan workload from slowing down a critical OLTP database? Or ensure one database's excessive PGA usage doesn't destabilize another? In this session, we will guide you through the best way to use Resource Manager for both departmental and cloud consolidation, focusing on the features added in the latest Exadata release.|| Sue Lee|
|TBA||TBA|| Tanel Poder|
|In-Memory and Super Cluster||One of the most significant changes in Oracle Database release 12.1.0.2 is the In-Memory Option. This session will start off with a few basic concepts of In-Memory execution, then quickly move on to experience from recent customer engagements with large data sets on Oracle SuperCluster. Some of the questions that will be addressed are: Where will you see the biggest improvement from In-Memory execution and where will you see the least improvement? What are the prerequisites as well as some disqualifiers for In-Memory Vector Transformation? What are some unique advantages of the SuperCluster in the context of In-Memory performance? What are the most common deployment scenarios where customers are choosing SuperCluster?|| Tyler Muth|