|TBA||TBA|| Tom Kyte|
|Supercharging Exadata TEMP I/O Performance with ZFSSA and InfiniBand||Exadata TEMP I/O performance may become a limiting factor under certain workloads. Examples include joining, aggregating, or sorting large amounts of data, resulting in heavy spilling to TEMP. This presentation explores one way to radically improve TEMP I/O performance by leveraging a ZFSSA connected to the Exadata machine via InfiniBand fabric.|| Alex Fatkulin|
|Hadoop vs. Oracle Database: Fight!||Imagine a world where only one of the two can survive - Oracle Database or Hadoop - which comes out on top? It's not difficult to imagine the world of data 10 years ago without Hadoop. Can you imagine the world of data without Hadoop now, or 10 years from now? What if your only choice of data store today was Hadoop - what would your database deployments look like without Oracle?
While these scenarios exist only in our imagination, we will use them to assess how specific data processing needs can be satisfied purely by Oracle or purely by Hadoop, and what the common and unique challenges are. || Alex Gorbachev|
|First 5 Things To Do to Your Exadata After Installation||Many new Exadata customers have the same question after their new toys are configured - "what next?" Drawing from Enkitec's wide experience implementing Exadata, this session will focus on the first five things you should do to your Exadata after installation for top performance. We will cover everything from database configuration best practices to tips and tricks that are sure to make the operating system on Exadata compute nodes more user-friendly.|| Andy Colvin|
|Lessons Learned from a Cloudy Year||As organisations move towards Database as a Service (DBaaS) as the preferred model for meeting their demand for databases at all stages of the software development life cycle through to Production, they will face some serious challenges before achieving the perceived benefits. Although Exadata and other platforms offer great consolidation potential, building the surrounding processes and management infrastructure required to support the rapid and flexible deployment models offered by public Cloud providers is a significant undertaking.
The goal of this presentation is to share lessons learned with those who might be about to embark on DBaaS and other consolidation projects, to balance the undoubtedly powerful but occasionally fluffy messages being delivered by the big vendors.
Although based on real-world experience working on a DBaaS project for a global banking client, the presentation focuses on challenges that are likely to be faced by most large organisations when building their own Private On-Premises Cloud solution. These include both technical and organisational challenges.|| Doug Burns|
|TBA||TBA|| Eric Sammer|
|Why ILM is More Important on Exadata - Using OLTP, Query High & Archive High Compression||Developing an Information Lifecycle Management (ILM) strategy is important in any organization, but with Exadata the importance multiplies due to the opportunities made possible by Hybrid Columnar Compression. These decisions can have a significant impact on storage costs and the amount of data that’s actively available to the organization. In this session we will talk about the importance of partitioning and the different strategies to consider, some of which provide unique and significant benefits on Exadata, as well as the dramatic impact that compression can have on storage and performance.|| Eric Timberlake|
|Think Exa!||Moving applications to Exadata is almost guaranteed to be a success. Many of these applications are replatformed, in other words moved from old hardware that has reached its end of life and has to be decommissioned. Depending on that platform, the move to more powerful processors alone can bring a huge performance improvement. Add the fact that you are not sharing your infrastructure with others (or, if you do, that you can control the amount of sharing), and you understand why most systems see a 3-5x improvement in performance. Oracle professionals call this type of migration a “lift and shift” exercise. For most consolidation projects that might actually be good enough; the benefits lie elsewhere, in license savings for example. But if you want to get the most out of Exadata, you should review more than just the underlying platform. Many application design patterns that worked well outside of the Exadata world don’t work as well on Exadata. This presentation will show you areas where you want to perform a review and Think Exa! Thousands of converted 2 GB data files, a huge SGA, too many buffered reads at the expense of smart scans, and heavy use of subpartitioning to reduce I/O requirements come to mind, as does the appropriate use of indexes. || Frits Hoogland, Martin Bach|
|Sneak Peek: Smart SQL Processing Across Hadoop and Exadata||Understanding the future of big data is crucial in the early stages of decision making around big data architectures. This session lays out Oracle’s roadmap and strategy for the enterprise big data platform of the future. Specifically this session will describe Oracle’s vision on enabling query capabilities joining data in both Big Data Appliance (Hadoop) and Exadata. The focus of this session is on the smart SQL features and functions coming to a Big Data Appliance near you in the near term.|| Jean-Pierre Dijcks|
|Lessons Learned from Several Exadata Implementations||This session will cover use cases of how Accenture clients have benefited from Oracle's Engineered Systems. Each Exadata migration represents a different set of challenges based on a customer’s infrastructure. We will take a technical deep dive into a handful of scenarios, review the challenges, explore lessons learned, and highlight the benefits customers are experiencing by adopting the Exadata platform.|| Julian Dontcheff|
|My PX Goes to 11||This session provides a case study detailing the implementation of extreme parallelism in a very large DW environment. The system was implemented on a RAC cluster hosted on two Exadata X3-8s. The application provides in-depth analysis of product performance and shopping behaviors for the retail industry. This presentation combines the business case with the technical hurdles that had to be overcome in order to make the system work. One of the biggest challenges was the decision/requirement not to do any pre-aggregation of data, meaning that every report had to hit the raw data. Exadata provided the perfect platform, allowing the power of SQL to be applied while still allowing a large portion of the work to be pushed down to the storage layer.|| Kerry Osborne|
|Zero-Downtime Migrations Using GoldenGate||Today’s database end users increasingly demand that their databases never be down. For day-to-day operations we have the MAA stack to take care of this, but migrations to a new platform are still a challenge. The traditional MAA stack does not work very well when, for example, you need to do an endianness conversion or a version upgrade; these migrations take time. In this presentation we will explore how to use GoldenGate to create a zero-downtime migration for your database. By walking through an example setup you will see how to set up GoldenGate for a zero-downtime migration and how to have an easy fallback scenario. Along the way you will learn the trade-offs between GoldenGate and other migration scenarios, as well as the pros and cons of using GoldenGate as a migration tool.|| Klaas-Jan Jongsma|
|A Tale of Two File Systems: Data Storage for Physics at CERN & Deep Dive Into Oracle ASM||In the first part of this session, Luca will present an overview of some of the key IT services at CERN for data processing. In particular, he will go through the main architectural choices and lessons learned from running a custom storage system of 100 PB and a processing grid running 2 million jobs per day. In the second part of the talk, he will shift to a deep dive into Oracle’s ASM, touching in particular on ASM internals, troubleshooting, diagnostic utilities, and lessons learned in production.|| Luca Canali|
|Do I Still Need Exadata Now That I Have In-Memory?||Oracle recently announced the upcoming availability of the Oracle Database In-Memory option, a solution for accelerating database-driven business decision-making to real time. Unlike specialized in-memory database approaches that are restricted to particular workloads or applications, Oracle Database 12c leverages a new in-memory column store format to speed up both analytic and OLTP workloads.
Given these announcements and the performance improvements promised by this new functionality, is it still necessary to run your Oracle Database on an Exadata environment?
This session explains in detail how Oracle Database In-Memory works and will demonstrate just how much performance improvement you can expect. We will also discuss how it integrates with an Exadata environment and will answer, once and for all, the question of whether you still need Exadata once you have In-Memory.
|| Maria Colgan|
|Customer Case Study: OLTP Consolidation on Exadata at DFS||Discover Financial Services (DFS) is a direct banking and payment services company that operates some of the most recognizable brands in financial services, including the Discover Card, Discover Network, Pulse Network, and Diners Club International. DFS has consolidated hundreds of OLTP applications onto more than a half-dozen Exadata frames, with a dedicated ecosystem of over 1 PB of Oracle ZFS storage for backup/restore.
This session will cover lessons learned in Exadata database consolidations, field upgrades of production Exadata systems, the evolution of InfiniBand-based backup and restore, and an overview of day-to-day operations of a large Exadata environment.|| Marty Stogsdill|
|Customer Case Study: Using Oracle Exadata in Cancer Research||The University of Texas MD Anderson Cancer Center has partnered with the Oracle Health Sciences GBU to implement and enhance Oracle Healthcare Data Warehouse Foundation (HDWF) and Oracle Health Sciences Translational Research Center (including Cohort Explorer and Omics Data Bank). The institution runs these products on Oracle Exadata as part of a customized end-to-end institutional solution for clinical and research data delivery. This session covers the recent migration of HDWF to Exadata X4 racks and also explores details of the architectural components, including genomic and NLP pipelines. You'll hear about MD Anderson's unique implementation of these technologies that will enable novel and personalized treatments in the battle against cancer.|| Nicholas Collins|
|Exadata SmartScan Deep Dive||SmartScan lies at the very heart of Exadata’s performance advantage. After reviewing what SmartScan is and how it works, we’ll take a deeper dive, looking at: 1. What is and isn’t offloaded and why (and how that has changed over the various releases). 2. The parallelism model and how SmartScan interacts with PQ. 3. How SmartScan works with ASM, Storage Indexes, and Zone Maps. 4. The benefits of columnar processing on Exadata and the centrality of SmartScan to HCC. 5. How to read the AWR stats for SmartScan and HCC. 6. How to isolate problems with SmartScan and the various parameters available to help diagnose issues.|| Roger MacNicol|
|Exadata Resource Manager Deep Dive||Resource Manager has become a critical feature on Exadata due to the popularity of consolidation. Whether you use server consolidation, schema consolidation, or 12c multi-tenancy, Resource Manager can help achieve your performance targets and protect against misbehaving workloads. This session starts with a detailed look at monitoring critical resources such as CPU, memory, disk, and flash by database or workload to understand actual resource utilization and contention issues. We will show some simple yet effective ways of using Resource Manager for consolidation. We will then take a deep dive into I/O Resource Manager to show how it can manage disk I/O and flash contention on Exadata. || Sue Lee|
|Using Hadoop for Oracle Data Warehouse Active Archiving||In this session we will go through examples and case studies of actively archiving old fact table data from the traditional Oracle data warehouse RDBMS to a Hadoop cluster. The archived data can be compressed and remains accessible via the query processing engine of your choice, such as Hive or Cloudera Impala. You can even keep using your existing SQL queries and BI tools via Oracle's ODBC connectivity.|| Tanel Poder|
|Exadata vs. Teradata in the Healthcare Industry||This session will recap a Proof of Concept in which a 40 TB database was migrated from Teradata to Exadata so that a head-to-head comparison could be done. The application at the center of the test was Epic’s Clarity Data Warehouse, which is in use at many large healthcare organizations in the US.
We will discuss the following aspects of this test: architecture comparison, data migration/conversion of 40 TB, test execution and results recording, and the unique challenges of very large databases. The results of this Proof of Concept are truly staggering. To find out who won, and by how much, come to this session. This is a “must see.”|| Timothy Fox|
|High Throughput Computing on Exadata||While a lot of the buzz about Exadata is around fast queries, that's only one component of performance. There are often many other challenges to address, including data movement, data loading, data aggregation, and statistics gathering. In this session I will address the end-to-end challenges I've seen while leading Proof of Value exercises with customers. Content will include design patterns, code examples, examples of different tools used to find bottlenecks, and, if all goes well, a live example or two.
|| Tyler Muth|