|Baking Big Data Batches: Fan-Out Parallel Query and Big Data SQL||Orchestrating an efficient parallel scan of multiple datastores as part of a single query is complicated. For each query, a balance must be struck between synchronized, serial control from the query coordinator and asynchronous parallel behavior from the remote systems. Oracle Big Data SQL provides a novel solution to this problem, with the unique ability to orchestrate fan-out parallelism on queries which simultaneously touch Oracle Exadata and Hadoop. In this talk, we explore the changes to Oracle Parallel Query necessary to accomplish this parallelism, and discuss future improvements to both the control and execution of queries spanning Oracle Database and data sources ranging from Hadoop to NoSQL.|| Dan McClary|
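The coordinator/remote-scan balance this abstract describes can be sketched in miniature. The following is an illustrative Python fan-out pattern, not Oracle's implementation: the names `scan_source` and `fan_out_query` and the toy datastores are assumptions for the example. A coordinator submits all scans asynchronously and merges results in completion order rather than serially polling each source.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-in for a remote scan against one datastore
# (e.g. an Exadata cell or a Hadoop node).
def scan_source(source, predicate):
    """Scan one datastore, returning only rows matching the predicate."""
    return [row for row in source["rows"] if predicate(row)]

def fan_out_query(sources, predicate):
    """Coordinator: launch every scan asynchronously, merge as they finish."""
    results = []
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = [pool.submit(scan_source, s, predicate) for s in sources]
        # Asynchronous fan-in: consume results in completion order,
        # not submission order.
        for fut in as_completed(futures):
            results.extend(fut.result())
    return results

if __name__ == "__main__":
    exadata = {"name": "exadata", "rows": [1, 5, 9]}
    hadoop = {"name": "hadoop", "rows": [2, 6, 10]}
    print(sorted(fan_out_query([exadata, hadoop], lambda r: r > 4)))  # [5, 6, 9, 10]
```

The key design point mirrors the abstract: the coordinator stays serial only at the merge step, while the remote scans run fully in parallel.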
|Oracle Exadata and database memory||This session is an in-depth look into the memory usage of the Oracle database. Proper use of operating system memory facilities is key for performance on the Oracle Exadata platform, though not only there. At Enkitec we have seen many incorrectly configured memory settings on both the operating system layer and the database, sometimes with drastic performance implications. Understanding how the database manages memory becomes even more important with database consolidation, which is regularly seen on Exadata.
The session details the memory settings of the operating system layer (Linux, as used on Exadata). It also explains the differences of the database private and shared memory areas, and the different memory models of the Oracle database (AMM, ASMM, manual).
Everything is presented with a clear method of diagnosis, which attendees can reuse.
|| Frits Hoogland|
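One of the Linux memory settings the session covers is HugePages, which must be sized to cover the SGA (and is incompatible with AMM). A minimal back-of-the-envelope sketch of that sizing, under the assumption of the default 2 MB huge page size; the function name is illustrative, not an Oracle tool:

```python
def hugepages_needed(sga_bytes, hugepage_bytes=2 * 1024 * 1024):
    """Return the HugePages_Total needed to back an SGA of sga_bytes.
    Rounds up: a partially used huge page must still be reserved whole."""
    return -(-sga_bytes // hugepage_bytes)  # ceiling division

# Example: a 10 GB SGA with the Linux default 2 MB huge pages.
print(hugepages_needed(10 * 1024**3))  # 5120
```

In practice the value would be compared against `HugePages_Total` in `/proc/meminfo`; the arithmetic above only illustrates the rounding involved.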
|Fill the Glass Live!||Cary Millsap hosts Alex Fatkulin, Hank Tullis, and Jason Arneil in a conversation about Oracle engineered systems.|| Hank Tullis, Jason Arneil, Alex Fatkulin, Cary Millsap|
|Real World Experience with the (O)VCA||This presentation will cover James' real world experience with a large VCA based project. The project encompasses the deployment of 3 VCA units - each with 12+ compute nodes. ZFS ZS3-2 storage is used as the underlying storage component.
In the presentation, James will discuss the issues surrounding VCA deployment, including Windows VM specifics, external storage presentation (via FC, iSCSI, NFS), tape storage and backup issues, OVM Manager, EM12c integration, patching and new node inclusion, and network issues (including firewalls and routing), as well as the tools used and the benchmarks created for the storage, network and compute aspects. He will discuss both the positives and the negatives of the solution, which covers several hundred VMs for Linux and Windows.
James will also discuss the benefits encountered with specific ZFS tuning, and how ZFS is actually a very good storage platform for consolidation.|| James Anthony|
|Exadata Capacity Planning||Exadata was first announced in 2008 and its presence grows every year, with most mid-to-large organizations now running it. It is very common to see applications either fail to make good use of its capabilities or abuse resources to an extent that negatively impacts application performance: flash being underutilized, HDD IOPS/throughput reaching its limit and causing delay at the application layer, or a compute node hitting 100% CPU due to excessive parallelism. This presentation will focus on gathering existing Exadata utilization statistics from the OEM repository, such as compute node/storage cell CPU utilization and HDD/flash IOPS, throughput, and response time. We will also discuss how to leverage this data to measure whether you are under-utilizing your Exadata or reaching its limits, and what should be done to make the best use of it.|| Kapil Goyal|
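The "approaching the limit vs. underutilized" comparison the abstract describes reduces to checking observed metrics against rated capacities. A minimal sketch follows; the function, the 80% warning threshold, and all numbers are illustrative assumptions, not actual Exadata ratings or an OEM query:

```python
def utilization_report(metrics, limits, warn_at=0.8):
    """Flag resources whose observed utilization approaches the rated limit.
    metrics: resource name -> observed value; limits: name -> rated capacity."""
    report = {}
    for name, observed in metrics.items():
        ratio = observed / limits[name]
        report[name] = "NEAR LIMIT" if ratio >= warn_at else "OK"
    return report

# Illustrative numbers only: flash is underutilized while HDD IOPS and
# compute-node CPU approach their limits.
observed = {"hdd_iops": 45000, "flash_iops": 120000, "cpu_pct": 95}
rated = {"hdd_iops": 50000, "flash_iops": 1000000, "cpu_pct": 100}
print(utilization_report(observed, rated))
```

In a real deployment the `observed` values would come from OEM repository history rather than literals, and the limits from the datasheet of the specific Exadata configuration.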
|What does the future hold for Exadata?||Since its introduction at Oracle Open World 2008, the Oracle Exadata Database Machine has been the platform of choice for those looking to achieve the highest performing and most available Oracle Databases.
Have you wondered what technologies can find their way into an Exadata? What is the engineering team thinking about?
In this presentation, we will introduce new technologies in storage such as persistent memory, networking technologies, security strategies in the Cloud, and breakthrough systems technologies that can potentially find their way into an Exadata.
|| Kodi Umamageswaran|
|A Deep Dive into HCC Mechanics & Internals||HCC is a much-hyped Oracle feature. Unfortunately, there are still too many misconceptions about HCC circulating on Oracle forums and blogs. This talk is about demystifying these. The audience will learn about the concepts of HCC as well as the other Oracle compression algorithms (basic/OLTP) before diving into the HCC-specific compression algorithms and their implementation. We will use block dumps to show the structure of the so-called Compression Unit and how to decode it.
Useful tips from HCC implementations with regard to information lifecycle management (ILM), including the new 12c Automatic Data Optimisation feature, conclude this talk.|| Martin Bach|
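The core idea behind HCC's Compression Unit is that grouping column values together compresses far better than row-ordered storage. A toy illustration of that effect, using Python's generic `zlib` rather than Oracle's actual algorithms (the table shape and column values are invented for the example):

```python
import zlib

# A toy "table": a unique id plus two low-cardinality columns --
# the shape where columnar organization shines.
rows = [(str(i), "US", "WIDGET") for i in range(5000)]

# Row-major layout interleaves the high-entropy ids with the constants...
row_major = "|".join("|".join(r) for r in rows).encode()
# ...while column-major groups identical values together into long runs.
col_major = "|".join("|".join(col) for col in zip(*rows)).encode()

row_size = len(zlib.compress(row_major))
col_size = len(zlib.compress(col_major))
print(row_size, col_size)  # column-major compresses noticeably smaller
```

HCC's actual compression levels (QUERY/ARCHIVE, LOW/HIGH) use different codecs, but the column-major reordering inside a Compression Unit is what they all exploit.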
|Database Health-Check: A healthy step before migrating into Exadata||Exadata is often used as a "consolidation" environment, where dozens of databases are re-platformed together for diverse reasons. Re-locating several databases could be as simple as restoring full backups. But it can also be used as an opportunity to fix old concerns, and in some cases unknown risks.
When a client wants to perform a "cleanup" of known and unknown issues of a few databases, maybe as a preparation for a re-platform into a new Engineered system, a methodical and unbiased "database health-check" can be of great benefit.
What exactly is a database health-check and what is usually reviewed? This session explains the diverse components of a practical database health-check, and how one can unearth unexpected concerns. Common findings are presented, as well as their remedies.
Sometimes the major challenge for a DBA is to know what he/she does not know about a database. The expected audience for this session is managers and database administrators who want to be proactive with regard to their databases.|| Mauro Pagano|
|Super Cluster: the Swiss Army Knife of the Engineered Systems||This session describes what was learned by moving from three M9000s to two SuperClusters.
Over the last couple of years many engineered systems have seen the light of day; the SPARC SuperCluster (SSC) is one of them.
It offers lots of flexibility and ways to consolidate your current Solaris environment, at your own pace, onto Exadata Storage Cell-powered SPARC hardware while protecting your investments.
This talk gives a short overview of the line-up but focuses on the technology used and the steps taken to consolidate three M9000s onto two SuperClusters.
Many Oracle Database technologies were used in this project:
RAC One Node and Data Guard,
Real Application Testing (RAT) to test-drive the move,
DBRM, instance caging and IORM to make consolidation work,
Enterprise Manager 12c and the Exadata plug-in.
It gives a brief overview of what the SuperCluster is and why it is a great solution if you are a Solaris shop wanting a flexible platform with the power of Exadata Storage Cells.
It goes over the issues encountered and how they were resolved, describes the stages of the project, and gives an overview of the technologies used: RAC One Node, Data Guard, Real Application Testing, Advanced Compression, Resource Manager, and IORM. It also describes how the InfiniBand network and Solaris zones work together.
If you want to know how the Oracle SuperCluster can make a difference for you and your organization, and how to avoid the pitfalls and potential issues, then this session is for you.
|| Philippe Fierens|
|Where’d My Resources Go? Capacity Planning at Exa-Scale||Oracle Exadata Database Machine is designed to significantly improve performance of OLTP, Data Warehouse, and mixed load applications. Managing CPU, Memory, and IO resources is key for good performance. Exadata engineered systems provide a large number of resources and a wealth of metrics to describe the utilization of those resources. But how does one rationalize 300+ metrics into a story about current utilization and forecasted capacity? This session will describe how we solved this problem for a Fortune 500 bank. We will go over how to organize and aggregate metrics to tell a capacity story. We will discuss methods to forecast future utilization in the presence of noisy data, environment changes (patches, incidents, downtime), and future business forecasts. Unlike most forecasts, this method predicts a distribution of outcomes so one can look at not only the average case, but also outcomes with a 5% probability. We will wrap up with a novel dashboard view of an Exadata system that can show a manager the current and 6-month projected utilization at a system level while allowing a performance tuner to delve down into the details.|| Rishi Khan|
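The "distribution of outcomes" forecasting idea in this abstract can be sketched with a trend fit plus a residual bootstrap. This is an illustrative Python Monte Carlo, not the presenters' actual method; the function name and the utilization history are invented for the example:

```python
import random
import statistics

def forecast_utilization(history, months_ahead=6, trials=10000, seed=42):
    """Fit a linear trend to monthly utilization, then bootstrap the residuals
    to simulate many possible futures. Returns the mean projection and the
    95th percentile: the outcome with only a 5% chance of being exceeded."""
    n = len(history)
    xs = range(n)
    x_mean = statistics.mean(xs)
    y_mean = statistics.mean(history)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    intercept = y_mean - slope * x_mean
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, history)]
    rng = random.Random(seed)
    target = intercept + slope * (n - 1 + months_ahead)
    # Each trial perturbs the trend projection by a resampled residual.
    outcomes = sorted(target + rng.choice(residuals) for _ in range(trials))
    return statistics.mean(outcomes), outcomes[int(trials * 0.95)]

# Illustrative CPU utilization (%) trending upward roughly 2 points/month.
mean6, p95 = forecast_utilization([40, 43, 41, 46, 48, 47, 52, 53])
print(round(mean6, 1), round(p95, 1))
```

The point of reporting a percentile rather than only the mean is exactly the one the abstract makes: capacity decisions should account for the unlucky tail, not just the average case.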
|SmartScan Deep Dive||A top-to-bottom look at how “Table Access Full” works in the context of SmartScan, including recent developments in how the buffer cache and direct reads work together, the introduction of “offload servers”, how SmartScan works with Database In-Memory (DBIM), and new features in 12.1.0.2. We will also look at SmartScan changes for the X5, the all-Extreme-Flash configuration, and Big Data SQL. The talk finishes with notes on how to triage SmartScan issues in the field.|| Roger MacNicol|
|Tips and Tricks for Successful Consolidation on Exadata||How do you guarantee CPU or flash space for a particular database? Or keep one database's heavy scan workload from slowing down a critical OLTP database? Or ensure one database's excessive PGA usage doesn't destabilize another? In this session, we will guide you through the best way to use Resource Manager for both departmental and cloud consolidation, focusing on the features added in the latest Exadata release.|| Sue Lee|
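The CPU-guarantee question this abstract opens with is typically answered with share-based allocation: each database gets CPU in proportion to its shares, and an idle database's entitlement is redistributed to the busy ones. A toy model of that behavior, not Resource Manager's actual algorithm; the function and share values are illustrative:

```python
def cpu_allocation(shares, demanding):
    """Distribute CPU among the databases currently demanding it,
    in proportion to their shares. Idle databases' entitlement is
    redistributed; under full contention each database is guaranteed
    its proportional slice."""
    active = {db: shares[db] for db in demanding}
    total = sum(active.values())
    return {db: s / total for db, s in active.items()}

shares = {"oltp": 4, "dw": 2, "dev": 1}
# Under full contention, OLTP is guaranteed 4/7 of the CPU:
print(cpu_allocation(shares, ["oltp", "dw", "dev"]))
# If dev goes idle, its entitlement is redistributed: oltp 4/6, dw 2/6.
print(cpu_allocation(shares, ["oltp", "dw"]))
```

The guarantee is a floor, not a cap: a database can exceed its slice whenever others are idle, which is what makes shares attractive for consolidation.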
|Hybrid World||Kerry Osborne and Tanel Poder discuss the future of Oracle and Big Data Hadoop architecture, and how to take advantage of the new architecture with your existing Oracle database.|| Tanel Poder, Kerry Osborne|
|In-Memory and Super Cluster||One of the most significant changes in Oracle Database release 12.1.0.2 is the In-Memory Option. This session will start off with a few basic concepts of In-Memory execution, then quickly move on to experience from recent customer engagements with large data sets on Oracle SuperCluster. Some of the questions that will be addressed are: Where will you see the biggest improvement from In-Memory execution, and where will you see the least? What are the prerequisites, as well as some disqualifiers, for In-Memory Vector Transformation? What are some unique advantages of the SuperCluster in the context of In-Memory performance? What are the most common deployment scenarios where customers are choosing SuperCluster?|| Tyler Muth|