1. DB-16: In Any Case, the Devil’s in the DataServer Details That which we persist in doing becomes easier,
not that the task itself has become easier,
but that our ability to perform it has improved.
Ralph Waldo Emerson (1803 - 1882)
Hello
This is session ….
Quote = relevant =>
spent a long time persisting @ ABL -> SQL
Task NOT easier: ABL -> SQL still different
Improved = ability to find relationships for mapping them, and then providing those improvements to you
I’m
DJM *
Principal with OE div
13 years doing OE Dev
Currently in Srvr Prod Grp (WS, AppServer, AIA, WSA & last but not least DS) + I architect DS product features & lead a DS team of devs
Done everything except invent
2. DB-16: In Any Case, the Devil’s in the DataServer Details Agenda A Story
Terminology & Technology Sync-up
The Devil’s in the Details
Case Studies
ABL & DataServer
ABL & the RDBMS
DataServer & RDBMS
DataServer & API Access Drivers
DataServer Performance
Summary & Questions
Since I didn’t invent => need to start w/a story about invention – (joke: if story => everything after that can be about me)
Story will MERGE INTO a discussion about Terms & Technology
Laying the groundwork for case studies:
demonstrate component interoperability, starting loosely in the Language and extending outward to the DataServer, access layer, and Database
Then, Performance discussion – Just Because … performance is always “the” topic of interest
Wrap-up & Questions
3. DB-16: In Any Case, the Devil’s in the DataServer Details A Story Mary and the three DataServers
This is the story of …
4. DB-16: In Any Case, the Devil’s in the DataServer Details Mary meets Compiler Know Mary? Foremost Arch. of OpenEdge Language
The story begins …
It’s the timeless tale of girl meets compiler, develops an amazing language for rapid application development and then longs to share it with the world.
5. DB-16: In Any Case, the Devil’s in the DataServer Details Client meets Server Mary’s language was multidimensional, both procedural and object-oriented. But it was the highly-focused OpenEdge database that proved to be its perfect host and partner.
Ah…, they enjoyed a special harmony, a unique synergy, an acquired taste for application complexities.
Know Gus? Foremost Arch. of OpenEdge Landscape+, preeminent authority on DBs (and by extension DS).
6. DB-16: In Any Case, the Devil’s in the DataServer Details Where’s the love ? Where’s my pie ? Suddenly, villainous characters (that need no introduction) usurp the stage:
Seeking recognition, a piece of pie, their own turf, their “fair” share.
7. DB-16: In Any Case, the Devil’s in the DataServer Details For Each What ? And, Mary knew full well, these hoodlums, why they’d be forever unresponsive to her elegant language.
8. DB-16: In Any Case, the Devil’s in the DataServer Details Hmmm…. So, she bargains with the devil to make something good come from their demands. Certainly she could find they had “some” redeeming qualities that bore a resemblance to her old, reliable partner.
9. DB-16: In Any Case, the Devil’s in the DataServer Details I’ve got Bingo ! “Eureka” she cried out ! Just re-invent these antagonists in the “image” of our story’s protagonist, she thought.
And disguise them by cloaking them in her partner’s garments.
10. DB-16: In Any Case, the Devil’s in the DataServer Details Trusted friends To pull it off, she’d need to enlist the help of some old pals.
So an APB was put out on our suspects and soon their whereabouts were collected.
11. DB-16: In Any Case, the Devil’s in the DataServer Details Mary makes a house call Armed with “disguised” images and “real” coordinates, Mary translates her language through images of Larry, Bill, Sam, (and others) and manages to get their cooperation after all.
BTW: That “cooperation” has been one of the moving targets in DataServer development ever since.
12. DB-16: In Any Case, the Devil’s in the DataServer Details Bravo ! Take a bow, Mary. While you’re off inventing new ways to share your language with the world, your DataServer legacy lives on and evolves with the times.
13. DB-16: In Any Case, the Devil’s in the DataServer Details The End And they all lived happily ever after in the OpenEdge house that Joe built.
The End.
The moral of our story:
Mary understood then what we all understand now => “Villains are lurking out there”
OpenEdge needs to remain “Open”.
Continue to extend the hand of cooperation =>
turn more “villains” into friends
14. DB-16: In Any Case, the Devil’s in the DataServer Details That was then … So, where do all these actors fit in the DS picture today ?
15. DB-16: In Any Case, the Devil’s in the DataServer Details Terminology & Technology Sync-up Here they are in their natural habitat.
The Client (broad sense=GUI, services, client logic, etc) interoperates w/schema holder (left) and the DataServer (below it)
DataServer interoperates with the client (above it) and the API (all the way left, bottom)
The API provides access to the SQL database (above it)
16. DB-16: In Any Case, the Devil’s in the DataServer Details Terminology & Technology Sync-up Now, Add = Optional broker/server component for client/server DataServer processing using Progress networking
17. DB-16: In Any Case, the Devil’s in the DataServer Details Terminology & Technology Sync-up Added = Required dictionary sub-component of the schema holder and of the foreign database.
18. DB-16: In Any Case, the Devil’s in the DataServer Details Terminology & Technology Sync-up The OpenEdge Dictionary subcomponent is designed to manage both
OpenEdge Databases &
schema holders
OpenEdge DB apps – operate on data & schema – represented by BLUE bar + data
DS apps - operate on schema only – See RED bar
So, DB & DS share a dependency on schema for compile-time referencing
Notice: “CHAR” in the schema “image” not equal “VARCHAR” or “VARCHAR2” in the foreign schema
Instead: “CHAR” is shared amongst all OpenEdge applications
Difference: “CHAR” describes data for OpenEdge DB apps
“CHAR” describes schema “image” for DS apps.
NOTE: Yes, “We” the DS do know about VARCHAR/VARCHAR2
But “we”, the OpenEdge app., do not.
19. DB-16: In Any Case, the Devil’s in the DataServer Details Terminology & Technology Sync-up Let’s speculate:
name “item-no” for an INT column in OpenEdge DB & name conforms to OpenEdge naming conventions
Field would need to be renamed “item_no” in a Foreign DS to conform to SQL naming standards
By mapping <dash> (from our image) to <underscore> (in Foreign DS name), we translate into SQL on behalf of the ABL.
“image” concept – has become recurring theme
NOTE - 3 implications to think about:
The ABL can compile using just a schema “image” but,through DataServer, can operate on foreign DB.
2. Needn’t connect Foreign DS to produce r-code that will run against Foreign DS
3. Since the schema image contains no data, the schema holder connection (for DS’s) can be –RO (see the sketch below)
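A minimal sketch of implications #2 and #3 (a hypothetical schema holder “mssholder” with logical name “sports”; names are illustrative, not from the session):
/* The schema holder carries only the schema "image", no data, so a
   read-only (-RO) connection is enough to compile; no foreign-DB
   connection is needed to produce the r-code. */
CONNECT mssholder -RO -ld sports.
COMPILE myprog.p SAVE.   /* r-code will run later against the foreign DB */
DISCONNECT sports.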
20. DB-16: In Any Case, the Devil’s in the DataServer Details Sync-up: 4-Tier Perspective 4 stages (tiers) for our DS show, one for:
Mary
Gus
Mary’s enlisted friend
our villains (Mary’s new friends)
Notice: Mary’s friend (the API layer) + optional DataServer Broker/Server - the only 2 actors that MUST share a stage.
21. DB-16: In Any Case, the Devil’s in the DataServer Details Sync-up: Self-Service Client 3-Tier Perspective “Self Service” mode = All we’ve done is remove optional DataServer Broker/Server component
Notice 2 things about this:
In this configuration = DataServer Server component is fully embedded in the OpenEdge Client
Consequently, by extrapolation, the Client (Mary) now inherits the requirement that the DataServer share a stage with the API layer.
22. DB-16: In Any Case, the Devil’s in the DataServer Details Sync-up: Application-centric Notice: In all OpenEdge technology – Federated DB requests/S.H. normalizes access
ABL = driving force.
ABL “designed” to make database requests to a “federated” layer
All in federation are asked to fulfill the same ABL requests – no matter who wants it
Schema image normalizes dissimilarities to meet the ABL’s demand for sameness.
So, DataServers are just another “service provider” in the “federation” of servers (OE/MSS/ORA/ODBC included)
NOTE:
Most shortcomings of any federated layer (or “server”) are acknowledged via the ABL’s DBRESTRICTIONS function (see the sketch below)
Most server nuances that require explanation => in User Guides.
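A quick sketch of that function in use (the logical name “sports” is illustrative):
/* DBRESTRICTIONS returns a comma-separated list of ABL features the
   named database connection does not support. */
DISPLAY DBRESTRICTIONS("sports") FORMAT "x(70)".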
23. DB-16: In Any Case, the Devil’s in the DataServer Details Sync-up: Client-Server Perspective Client/Server perspective…
The ABL Client is served by a “federation” of “providers“ (servers).
Those “providers” then become clients to a looser “standards-based” federation of “SQL providers” (servers).
Joke: Yes, the DataServer is not only the federation president, he’s also an API client (hair club slogan spin)
Thus: DataServer is a server to the OpenEdge client
But a client to the API and SQL databases
24. DB-16: In Any Case, the Devil’s in the DataServer Details Sync-up: Component Perspective Finally, let’s look at configuration from the “Component” perspective…
When examining Interoperability = best described w/DataServer’s componentry.
Component configuration: Needs to show DataServer as “stretched” between two components
DataServer = member of Language component BUT has dependencies on the Access component
Remember: DataServer =
Server to the ABL
Client to the API
25. DB-16: In Any Case, the Devil’s in the DataServer Details Sync-up: DataServer Component Layers “layering” or “stacking” the components: (“top” is “bottom” ?)
The Language (in blue)
___
Top = ABL
Next = “stretched” DataServer – bridges the gap between ABL and access component
----
Next = DB Access component –or- API layer
Last = SQL Database
Now let’s crack the “case” on some case studies involving these components …
26. DB-16: In Any Case, the Devil’s in the DataServer Details OpenEdge Reference Architecture for Service-Oriented Business Applications DataServers “fit” into the OERA:
OpenEdge Component - consisting of the Client & DataServer layers
27. DB-16: In Any Case, the Devil’s in the DataServer Details Case Study: Components Case studies will traverse in loose order the path:
ABL -> DS -> API -> Foreign DS
Almost all have dimensions of other components.
Case Study Selection Criteria:
Recent is better (keep it fresh)
Points for “timeliness”
Exemplary behavior for component or component set
NOTE: Just about any sampling shows a kind of random interdependence between components
28. DB-16: In Any Case, the Devil’s in the DataServer Details
Appreciate intricacies amongst components
Demonstrate the somewhat “arbitrary” diversity DataServer issues
Create opportunities to discuss important aspects of the DataServer architecture.
Promote the interest of a problem-solver audience Case Study: Goals
Restated: “The devil’s in the DataServer details” and there’s no shortage of detail
Given that: Jot down notes about a particular case study when you have them
but hold your questions until the end of the session.
29. DB-16: In Any Case, the Devil’s in the DataServer Details DataServer deference given to the database
On Security
On Transaction Control
On Lock Management
On Cursor Consistency
Case Study: Prerequisite & Disclaimer What a DataServer IS and IS NOT
= A translator for SQL Databases operating against the ABL (relationship manager)
<> Database manager (per se): These are the strengths of Databases, not DataServers
DataServers = no interest in controlling these inherent strengths (list bullets)
Rather: DataServers provide compliance & management within the context of an ABL application
For example: Each data manager handles transaction rollback & recovery but …
DataServers happen to do it within the context of OpenEdge transaction scoping rules.
30. DB-16: In Any Case, the Devil’s in the DataServer Details I ran a pro-to-<dataserver> migration and compiled some code against the schema holder it built. Everything compiled just fine.
Then I built a separate schema holder, pulled definitions from the very same SQL database, and connected to it. But, against this schema holder, the compiled code fails.
Why would it matter how we create the schema holder as long as the structure of the SQL database remains exactly the same !
The Devil’s in the Details: Case Study #1
This was a customer new to DataServers, confused about some basic DataServer architecture
The Answer:
Well, it does matter how you create the schema
And there are two ways to do so …
31. DB-16: In Any Case, the Devil’s in the DataServer Details DataServer Migration Migration = Push (schema+data) & Pull (schema)
Update Schema = Pull (schema only)
To appreciate why each produces a different schema “image”
Recall “Item-no” example …
After Migration, the name in the original database & in schema image = same, Foreign DB = different
There are many potential differences:
Names (and other things) are different because of nuances/restrictions over translation
Schema holders -> different because database architecture/constructs different -> DS schema interprets them
Field names
Data types
column positions+
Database Differences
Different types of triggers
Some have sequences, others don’t
Some row size limits are 32k, others are 8k
Some allow 4000 columns in a table, others only 256.
The result: schemas are different per DataServer, per OpenEdge release, etc.
32. DB-16: In Any Case, the Devil’s in the DataServer Details Why don’t my schema holder “pull” definitions match the OpenEdge Database they were derived from ?
The script that built the SQL Database came directly from OpenEdge Database !
The Devil’s in the Details: Case Study #1 (cont.) Ok, true: Both DBs’ definitions don’t “match” but …
Both are OpenEdge DB’s made in the “image” of a foreign DB
Both are compatible w/foreign DB & can be run against the foreign DB
Same: In terms of OpenEdge Language => both are fully operable and functional
Difference: In terms of r-code compile resolution, the schema “image” definitions are different
33. DB-16: In Any Case, the Devil’s in the DataServer Details DataServer Pushed & Pulled Independently Recall “Item-no” example
Observe what happens when the push and pull are performed independently …
Just run the push script: Item-no ISN’T valid in SQL so in the foreign DS => becomes “item_no”
Now just pull the definitions: “Item_no” IS a valid OpenEdge name
So in the schema holder, the SQL name is just copied (unmodified)
So why did the migration produce “Item-no” in the schema image and
the pull produced “Item_no” in the schema image ?
Only the migrated database is reconciled against the original DB (this is an automatic operation of the migration)
Unless you run the Adjust Schema on the pulled DB, the original database and schema holder are unreconciled.
Answer to the customer was:
Run the adjust-schema on the pulled schema holder using the original DB
Then your pulled schema image will match your migrated image.
34. DB-16: In Any Case, the Devil’s in the DataServer Details Pull & Adjust Schema If you only do an Update Schema (i.e. just pull)
But, you have a copy of original OpenEdge Database ?
Connect both
Run “Adjust Schema” to “match-up” or RECONCILE original DB with schema holder DB
Now the compile against migrated schema => won’t fail against pulled schema !
Whenever foreign definitions <> schema image => errors, i.e.,
Reference mismatches @ compile time
CRC mismatch @ run-time
35. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #2 We deployed a new schema holder to our customers and their production applications started crashing with no warning. We didn’t make any code changes, but if we do, the application still crashes, only with a different error. What would cause the DataServer’s inconsistent behavior and how can we get an indication of the problem ?
Very hard to get an indication of the problem because this one was self-induced.
The Problem:
They changed schema without updating all their changes to the schema holder
They had the “skip-schema-check” option turned on in their client application.
This masked the schema mismatch between their foreign DB and the schema image.
Recall: “Pull” process loads the “foreign image” into a schema holder
R-code is compiled against that image
When you execute R-code, & the run-time 1st references a given table not previously seen => schema check
“Skip-schema-check” means skip the run-time schema check (usually for production performance)
When the schema image doesn’t match the foreign DB ?
The mismatch can lead to:
Data truncation
Rolling back a delete would recreate the wrong record
non-numeric value in an integer columns
who knows ? (specific failure = unknown, failure = certain)
The Answer
Don’t use skip-schema-check if you ever plan to change the schema
Find the mismatched tables and re-pull them from the foreign DB and/or re-reconcile them against the original OpenEdge DB and re-migrate them. You can’t just change your schema holder without reconciling the changes against the foreign database (and vice-versa).
Maintaining the original OpenEdge databases with your schema changes allows you to do an incremental update when you change schema. (A connection sketch follows.)
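A hedged connection sketch (schema holder and logical names are illustrative; verify the exact -Dsrv spelling against your DataServer guide) showing where the option rides:
/* skip-schema-check suppresses the run-time check that the schema
   image matches the foreign database - safe only when the two are
   known to be reconciled. */
CONNECT mssholder -ld sports -Dsrv skip-schema-check.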
36. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #3 I compiled my application against an OpenEdge Database. Then, I did the same against my Schema Holder. Why did the DataServer r-code grow larger than my OpenEdge database r-code ? The problem:
There is none
Each DataServer:
has its own record descriptors
has its own compile layer
has its own unique translation requirements
The answer:
Size mismatch is ok and expected between the different “personalities” of the “federation”.
Old days: DataServer code size delta could overflow the r-code segment.
told people -> Rearrange code using internal procedures
Now: ABL has 4 action code segments; usually makes size a non-issue
37. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #4 During a migration, I’m getting truncation errors loading data into my SQL Server database. I elected the “load data” option. The schema copied fine but migrating data from my OpenEdge database to a DataServer schema holder fails. Why would the migration create definitions too small for my data ? Answer:
Because you need to manage them.
The Problem:
OpenEdge database doesn’t care about size of a character column
SQL databases require a length for character columns
Migration uses FORMAT length by default but format designed = DISPLAY length
Alternatively, x(8) override to x(30) - expands default, might help a little or
The best solution:
Set SQL Width: Use “dbtool” from the Dictionary prior to migration.
This will scan the OpenEdge DB for the data size of character content
providing a width wide enough for all your existing data. (A width-audit sketch follows.)
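For intuition, a hand-rolled sketch of the kind of scan dbtool performs (assuming a character field cust.name; this is not the actual dbtool implementation):
/* Find the widest stored value so the migrated character column
   can be sized to hold all existing data. */
DEFINE VARIABLE w AS INTEGER NO-UNDO.
FOR EACH cust NO-LOCK:
  w = MAXIMUM(w, LENGTH(cust.name)).
END.
DISPLAY w LABEL "Widest cust.name".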
38. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #5 I’m migrating an OpenEdge database to a DataServer schema. When I elect to load data, I get NULL-constraint and unique-constraint violations. The problem:
All OpenEdge fields can store the unknown value
And, during a Migration, the Dictionary maps OpenEdge “unknowns” over to SQL NULL
But, Only “null-capable” columns in SQL can store NULL
So what happens if an OpenEdge field w/”unknown” is mapped to a SQL column with a NULL constraint ?
This happened because:
The Migration puts a NULL-constraint (i.e., NOT NULL) on “mandatory” fields
So, how did the “unknown” values get into a mandatory field ?
The answer:
The migration’s “load” procedure bypasses validation expression checkers (i.e., validation applied to interactive apps. only)
Therefore: Migrating a mandatory field containing unknown …
Produces: A NULL value in a NULL-constraint SQL column = Violation
The solution:
Evaluate your mandatory fields checking for unknown values before you migrate.
Try setting default value for these columns or turning off the mandatory option.
Problem 2: Just take problem 1 a step further and you have unique-constraint violations
An OpenEdge field can store “unknown” in > 1 row, even if it is defined “unique”
A SQL column usually can’t store > 1 row w/NULL if it is “unique-constrained“
If a unique field in your OpenEdge DB has “mandatory” set and has > 1 row w/“unknown” value
It will produce a “unique-constrained” SQL column w/ > 1 NULL = Violation
The solution:
Same as Problem 1 – Evaluate whether your mandatory fields have unknown values before you migrate.
Try setting default values for the mandatory columns or turning off mandatory. (A pre-migration audit sketch follows.)
39. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #6 (Intro.) Record Write & Availability
DEFINE BUFFER xcust FOR cust.
CREATE cust.
cust-num = 111.
FIND xcust WHERE xcust.cust-num = 111.
OpenEdge creates/writes records when supplied “keys” (values for indexed fields)
DataServers create/write at end of record scope
In this sample: Record not found => Not created when FIND executes
NOTE: Records are scoped to outermost block in which they’re used
Here, record scope = whole procedure (implicitly)
40. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #6 (Intro.) Record Write & Availability
DEFINE BUFFER xcust FOR cust.
CREATE cust.
cust-num = 111.
VALIDATE cust. /* or RELEASE cust. */
FIND xcust WHERE xcust.cust-num = 111.
Forcing immediate Create/Write by:
VALIDATE
RELEASE
RECID
Transaction boundary
NOTE: Prior to end-of-record-scope (or any of the above) -> ASSIGN values just sit in the buffer
41. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #6 I’m getting a STOP condition trying to assign a column value after a FIND statement. The application exits after the ASSIGN. How can I be sure this isn’t corrupting my database ?
DO TRANSACTION:
  FIND FIRST cust.
  ASSIGN name = FILL("a", 35) NO-ERROR.
END.
The Problem:
Customer is assigning 35 characters to field whose max column size is less than 35 …
This leads to truncation errors.
Myths debunked about what’s actually happening:
Not a “STOP” condition. Is an “error” (i.e., value > max column size)
Error not happening @ ASSIGN stmt; it happens at the transaction boundary when the write is validated. Until end-of-scope, the buffer values just sit in memory.
Since record ^written => no corruption
The solution:
Add error handling to deal with the condition before it’s too late @ transaction END.
42. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #6 (cont.) But, I’m still getting the error condition
DO TRANSACTION:
  FIND FIRST cust.
  ASSIGN name = FILL("a", 35) NO-ERROR.
  VALIDATE cust.
END.
VALIDATE forces validation and a WRITE at that point & the validation produces the ERROR (before trans. bounds)
Could have used:
RECID (see comment)
RELEASE
overwritten the same buffer
created a transaction boundary
RELEASE & VALIDATE: They both operate on the ABL buffer, not the foreign DB
VALIDATE retains the buffer but validates its contents
RELEASE recycles the buffer in the application
Both operations have the effect of invoking a database write
WRITE & COMMIT: @VALIDATE means a copy of the record = written to DB memory
May be committed or rolled back between the write and the commit boundary
RELEASE: Makes it confusing because in the ABL, you’ve lost your buffer reference
But in the database, you still have an uncommitted write event
NOTE: No way to reorg. code to make ASSIGN end-of-record-scope, i.e., this happens @ block-level
Anyway, customer still gets the ERROR because he doesn’t have error handling …
Could have used:
RECID (see comment)
RELEASE
overwritten the same buffer
created a transaction boundary
RELEASE & VALIDATE: They both operate on the ABL buffer, not the foreign DB
VALIDATE retains the buffer but validates its contents
RELEASE recycles the buffer in the application
Both operations have the affect of invoking a database write
WRITE & COMMIT: @VALIDATE means a copy of the record = written to DB memory
May be committed or rolled back between the write and the commit boundary
RELEASE: Makes is confusing because in the ABL, you’ve lost your buffer reference
But in the database, you still have an uncommitted write event
NOTE: No way to reorg. Code to make ASSIGN end-of-record-scope, i.e., this happens @ block-level
Anyway, customer is still gets ERROR because he doesn’t have error handling …
43. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #6 (cont.)
DO TRANSACTION:
  FIND FIRST cust.
  ASSIGN name = FILL("a", 35) /* NO-ERROR */.
  VALIDATE cust NO-ERROR.
  IF ERROR-STATUS:ERROR THEN DO:
    <some error processing>
    UNDO, LEAVE.
  END.
END.
So to avoid the error:
Process error (w/NO-ERROR) on the VALIDATE
UNDO transaction in advance of “end-of-transaction” scoping in example
NOTE: Sometimes MSS/ODBC DS’s do some “validation” @ ASSIGN.
Don’t require VALIDATE to produce error
Why: Design difference - MSS/ODBC store some validation information in client record structure.
Oracle doesn’t.
44. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #6 (cont.) But, I’m still getting the error condition
DEFINE VARIABLE i AS INTEGER NO-UNDO. /* used by the ProError CATCH below */
DO TRANSACTION ON ERROR UNDO, LEAVE:
  FIND FIRST cust.
  ASSIGN name = FILL("a", 35) NO-ERROR.
  VALIDATE cust.
CATCH Progress.Lang.AppError ae:
MESSAGE "Inside AppError Catch".
IF ae:GetMessage(1) <> ? THEN
MESSAGE ae:GetMessage(1) ae:GetMessageNum(1).
ELSE
MESSAGE "ReturnError" ae:returnvalue view-as alert-box.
DELETE OBJECT ae.
END CATCH.
CATCH Progress.Lang.ProError pe:
MESSAGE "Inside ProError Catch".
REPEAT i = 1 TO pe:NumMessages:
PUT UNFORMATTED " Error Number: " pe:GetMessageNum(i)
FORMAT ">>>>>9" SKIP "Message: " pe:GetMessage(i)
FORMAT "x(73)" SKIP.
END.
DELETE OBJECT pe.
END CATCH.
END. By the way,
You can always try your hand at the new object-oriented catch object blocks. New in OpenEdge 00 ABL.By the way,
You can always try your hand at the new object-oriented catch object blocks. New in OpenEdge 00 ABL.
45. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #6 (cont.) Warning:
DO TRANSACTION:
  FIND FIRST cust EXCLUSIVE-LOCK.
  ASSIGN name = FILL("a", 35) NO-ERROR.
  VALIDATE cust. /* CONTENTION EXPOSURE ! */
END.
WARNING:
Case Study #6:
Highlights OpenEdge’s transaction-based versus SQL’s set-based orientation.
The reason DataServers write @ end-of-scope is to limit lock exposure in a set-based environment
OpenEdge’s OLTP bias: it does not write records as part of a set and can manage its own locks.
Using VALIDATE to force an immediate write increases lock exposure to other users. The record changes are probably cached in memory but not written to disk
If a lot of activity goes on between the VALIDATE and TRANSACTION END, the locked records create exposure for record contention until the locks are released at the end of the transaction.
46. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #7 I have two clients running simultaneously. Both create records and lock records exclusively on the same table. Why are they getting “table in use by another user” errors ?
DO TRANSACTION:
  FIND FIRST cust EXCLUSIVE-LOCK.
  IF AVAILABLE cust THEN ASSIGN name = "Bob".
  REPEAT:
    FIND NEXT cust EXCLUSIVE-LOCK.
    IF NOT AVAILABLE cust THEN LEAVE.
    cnt = cnt + 1.
    ASSIGN name = "Bob" + STRING(cnt).
  END.
END.
Lock Exposure on steroids …
The Problem:
The TRANSACTION block encompasses all the SQL requests included within it.
REPEAT is not an implicit transaction.
Exclusive locks are accumulated until end-of-transaction boundary.
Records written at end of transaction – which is the outermost block
47. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #7 (cont.)
DO TRANSACTION:
  FIND FIRST cust EXCLUSIVE-LOCK.
  IF AVAILABLE cust THEN ASSIGN name = "Bob".
  REPEAT:
    DO TRANSACTION:
      FIND NEXT cust EXCLUSIVE-LOCK.
      IF NOT AVAILABLE cust THEN LEAVE.
      cnt = cnt + 1.
      ASSIGN name = "Bob" + STRING(cnt).
    END.
  END.
END.
Adding an EXPLICIT transaction in the REPEAT loop doesn’t help unless … the foreign DB supports nested transactions.
Even if it does ….
the 2nd DO TRANSACTION – issues a warning in ABL since nested transactions aren’t supported
MSS also does not.
Means everything is still scoped to the outer transaction
48. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #7 (cont.)
DO TRANSACTION:
  FIND FIRST cust EXCLUSIVE-LOCK.
  IF AVAILABLE cust THEN ASSIGN name = "Bob".
END.
REPEAT:
  DO TRANSACTION:
    FIND NEXT cust EXCLUSIVE-LOCK.
    IF NOT AVAILABLE cust THEN LEAVE.
    cnt = cnt + 1.
    ASSIGN name = "Bob" + STRING(cnt).
  END.
END.
The Solution:
Take off the OUTER transaction block.
Scope lock granularity to a single iteration.
Risk of lock contention significantly reduced
49. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #7 (cont.) Another example regarding cursor consistency
DEFINE VARIABLE num AS INTEGER INITIAL 103.
DO TRANSACTION:
  FIND cust WHERE cust.cust-num = num EXCLUSIVE-LOCK.
  ASSIGN name = "Bob".
END.
FIND cust WHERE cust.cust-num = num.
DISPLAY name.
In an OpenEdge “Database” Application:
“name” after the 2nd FIND always guarantees “Bob” (that is, another user could not have updated the record)
Why:
EXCLUSIVE-LOCK downgraded to SHARE-LOCK and is held AFTER the transaction block.
With an OpenEdge database SHARE-LOCK, you can’t update the name but neither can anyone else.
With SQL and a SQL Database Application:
All record locks are dropped at transaction boundary. (another idiosyncrasy of SQL’s set-based architecture)
NOTE: There are a few SQL databases that can be made to hold a lock after a transaction but it’s not standard SQL behavior.
So, for DataServer applications, what’s the chance that “name” will equal “Bob” at the 2nd FIND statement ?
The Answer:
It’s based on database’s Cursor consistency for the 2nd FIND query
And, that cursor consistency is based on the transaction isolation level (set for the foreign DB connection).
With read-uncommitted, any other user’s change will be seen
With read-committed, other users’ committed changes will be seen
With repeatable-read, name = “Bob” is guaranteed (= closest isolation level to OpenEdge’s SHARE-LOCK)
50. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #8 Parentheses around names in the WHERE clause of our join are [slow/fast], generate multiple queries, and send index hints with the queries.
FOR EACH customer, EACH order WHERE (order.custnum) = (cust.custnum):
Oracle documentation says joins won’t pass index hints on a server join. If we just remove the parentheses:
FOR EACH customer, EACH order WHERE order.custnum = cust.custnum:
We only get one query, [better/worse] performance and no hints.
NOTE: Slow->better, fast->worse is implementation-specific (I’ll explain later)
Usually the former, but not always.
The Problem:
With parentheses, the query is joined on the client
Without parentheses, the query is joined by the server
Query 1: A look @ “DataServ.lg” reveals two non-join selects with hints (one select per join table). These 2 results are combined on client
NOTE: Specifying QUERY-TUNING(NO-JOIN-BY-SQLDB) would produce the same resultant.
Query 2: Performs a JBS, no index hints and optimizer is allowed to make most efficient join plan.
Why: Not enough info. to make JBS possible. The compiler interprets parentheses as an implied “expression”.
Names & literals are normally resolved in advance of the query (also some expressions).
Most else is assumed to have client-side dependencies. It is the client’s job to resolve these at run-time.
NOTE: Only DS’s care about gaining JBS efficiency
For OpenEdge: The parentheses are not a concern because joins are done only one way. For DataServer: How the join is done affects performance.
Most of the time JBS is the better option – join gains optimizer efficiencies
Client join might be quicker when there are 2 customer records and 100,000 orders
Client only collects the 2 customer records once. Doesn’t need to join the 2 records to all 100,000 orders
and duplicate the customer information amongst all the records.
It might be possible for the compile layer to learn this syntactical exception and always choose JBS when the expression inside the parentheses is only a name reference, but its current status is “future enhancement”. (A QUERY-TUNING sketch follows.)
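A hedged sketch of forcing the join site explicitly with the QUERY-TUNING phrase noted above (table and field names as on the slide):
/* NO-JOIN-BY-SQLDB forces the client-side join - the same resultant
   the parenthesized WHERE clause produced; JOIN-BY-SQLDB requests a
   server-side join instead. */
FOR EACH customer, EACH order
    WHERE order.custnum = customer.custnum
    QUERY-TUNING(NO-JOIN-BY-SQLDB):
  DISPLAY customer.custnum.
END.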
51. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #9 - Part I In our Oracle DataServer application, one of our users got a locked record error: “<table> in use by <user> on <tty>. Wait or press CTRL-C to exit. (121)“
Pressing CTRL-C does not actually do anything. Our client script isn’t trapping CTRL-C, so what is ?
NOTE: stty on Unix shows Ctrl-C is mapped
The problem:
OCI driver changed attributes of signal mask and disabled Ctrl-C at DS connect time.
Tough one to understand in the environment but a function of it.
DataServer must go under the microscope to find this one.
The solution:
Was fixed by saving off a copy of the signal mask prior to connecting OCI.
Then, restoring that OCI signal mask when the DataServer is disconnected
52. DB-16: In Any Case, the Devil’s in the DataServer Details The Devil’s in the Details: Case Study #9 - Part II Why did I get a “System Error 126” on Windows connecting through the ODBC DataServer ?
"Specified driver could not be loaded due to system error 126" The “infamous” System Error 126 on Windows: (A thing of Folklore on the web)
Problem typically occurs when:
Some library couldn’t “load”, or can’t be found or isn’t being referenced properly.
Amongst the claims: System 126 was caused by:
various Installs/Uninstalls, especially in MS SQL Server
Power outages ?
Installing MDAC, or 3rd party database drivers led to driver mismatches.
MSS/Sybase – used to ship stacked drivers: “dblib” on top of “netlib” that would become incompatible with one another.
The Solution:
Now a rare occurrence with DataServer drivers (libs were consolidated)
Remedy: Right-click on the offending DLL. Choose the View Dependencies option or use MSVC Tools menu “Dependency Walker‘ pgm.
System 126 on Windows is similar to when the Oracle client can’t find the “libclntsh” shared library. Those errors are also few since “libclntsh” dynamically loads now -- except on AIX where you still need to probuild to statically link “libclntsh”
API configuration problems are often manifested when trying to connect.
Driver considerations:
Not all drivers support the same compliance levels (ours do)
Not all vendors (or versions) implement features exactly the same way
Drivers have environmental dependencies
Drivers can have bugs
53. DB-16: In Any Case, the Devil’s in the DataServer Details Our Oracle DataServer application is running fine. Why does our application all of a sudden have a problem with an invalid cursor and then exit ? The Devil’s in the Details: Case Study #10 The Problem:
Customer has done something in the environment to cause a cursor to be invalidated
All DS’s use an internal cache to maintain references to open cursors.
Cache makes cursors re-usable - spare re-preparation of the query plan
Cursors in the cache belong to the session that prepares them.
They are NOT expected to be modified or disabled outside of the DS environment
What happened:
Had customer turn logging => VERBOSE
Cursor id tracking showed a cursor prepared and executed
Then a SEND-SQL alters an index over the same table as the cursor.
Next, appl. receives return code 54 on a fetch (“resource busy”) trying to reuse the cursor.
Cursor appears to be destroyed
Why ?
SEND-SQL was performing - online index rebuild over table indexes the cursor was built over
ALTER INDEX <index-name> REBUILD ONLINE - invalidated the cursor at run-time
The Solution:
Don’t do any run-time maintenance that might modify an existing, cached, session cursor. (A SEND-SQL sketch follows.)
54. DB-16: In Any Case, the Devil’s in the DataServer Details We migrated our legacy Oracle DataServer application from an earlier Progress implementation, that used “shadow columns”, to OpenEdge where it uses Function-Based Indexes. Why are we now seeing performance problems that appear to be related to Function-Based Indexes ? The Devil’s in the Details: Case Study #11 In the 1st OpenEdge release, “shadow columns” were replaced by FBI’s (to support case-insensitivity)
Terms:
Shadow columns "shadow" a lower-case or mixed-case column with an equivalent upper-case column
Whenever the subject column (marked CI) is referenced, the shadow is substituted in its place =>
The OpenEdge DS application internally references the case-insensitive "shadow" column instead
FBIs support case-insensitivity with indexes built on the UPPER function applied to character columns in queries.
More seamless
Gets cost-based optimizer efficiencies
The Problem:
Admittedly: the exact mechanics of this study are unclear, but the problem/remedy are verified
Customer set Oracle "Optimizer_mode = CHOOSE"
They had no statistics for their database => historically caused Oracle to "choose" RBO mode.
The customer then moved to Oracle 10, where the RBO is desupported; the result was the CBO, but without the aid of statistics
= a recipe for performance disaster
Solution:
Create statistics and use the CBO
Customer wanted to retain the RBO and re-migrate using shadow columns
The shadow-column migration check box, removed in 10.0A, was reinstated in 10.1A
Could preserve the RBO execution plan in the CBO environment with "stored outlines" – designed to provide plan stability
What is the RBO?
RBO = a set of 16 heuristics used to optimize query execution
RBO = the execution plan is unaffected by data statistics
RBO = de-emphasized in Oracle 8, last supported in 9i, desupported in 10.
Virtually all optimization features from Oracle 8 onward – including all index hints – disable the RBO
What is the CBO?
The CBO has been strongly encouraged by Oracle since version 8
All hints, except RULE, cause CBO behavior from version 8 onward.
An index hint and a RULE hint together in the same SQL query will negate the RULE hint.
Interestingly:
The ABL’s query analysis = rules-based.
It shares similar heuristics with Oracle’s RBO
When the RBO and the ABL’s query analysis happened to derive the same index selection,
customers may have enjoyed a certain level of predictability in the derived query plan
But …
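A hedged sketch of both sides of this study, sent as native Oracle SQL through send-sql-statement (object names are hypothetical; ANALYZE is shown for brevity, DBMS_STATS being the modern alternative):

/* The kind of function-based index the migration builds for a
   case-insensitive character column */
RUN STORED-PROC send-sql-statement
    ("CREATE INDEX cust_name_fbi ON customer (UPPER(name))").
CLOSE STORED-PROC send-sql-statement.

/* With the RBO desupported, the CBO needs statistics to cost that index */
RUN STORED-PROC send-sql-statement
    ("ANALYZE TABLE customer COMPUTE STATISTICS").
CLOSE STORED-PROC send-sql-statement.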
55. DB-16: In Any Case, the Devil’s in the DataServer Details DataServer Rule: USE-INDEX and BY clauses affect the SQL ORDER BY clause. This guarantees the order of the results but does NOT guarantee the index selections of an execution plan. The Devils in the Details: Case Study #11 (cont.) Conclusions:
Regarding collation:
For all DataServers: if no order is specified, then any order is equally valid
Regarding query processing:
ABL query analysis always generates an index selection
(For Oracle) that index is usually passed in the form of an Oracle hint
(Oracle DS) can’t guarantee the optimizer will use the hint.
(No DS) can guarantee a particular index will be selected for an execution plan
(Side note) Other query conditions that prompt the Oracle DS to generate hints:
FIRST_ROWS hint generated for
FIND FIRST/LAST,
Dynamic FIND
SELECTs with a WHERE clause by ROWID = ?
Lastly, RECALL: all such hints and FBIs inherently invoke the CBO, not the RBO
Regardless of any expected symmetries (ABL & RBO, for instance)
Do not rely on an index hint or some "predicted" index selection to determine collation. There is no "default order"
Only USE-INDEX and a BY clause will guarantee a specific collation in a DS result set
If both BY and USE-INDEX exist:
For index hints: deference is given to USE-INDEX (then WHERE or BY could produce an index hint depending upon the ABL’s index analysis)
For collation: deference is given to the BY clause
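A short sketch of the two forms that do guarantee collation (index and field names are illustrative):

/* USE-INDEX: drives both the index hint and the SQL ORDER BY */
FOR EACH customer USE-INDEX cust-name-idx NO-LOCK:
    DISPLAY customer.name.
END.

/* BY: guarantees the ordering even if the optimizer picks another index */
FOR EACH customer NO-LOCK BY customer.name:
    DISPLAY customer.name.
END.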
56. DB-16: In Any Case, the Devil’s in the DataServer Details We’ve converted our legacy DB2/400 database to run against the ODBC DataServer. Some of our existing tables don’t have indexes, so we describe indexes in the schema holder that satisfy the DataServer’s ROWID requirements. But when we display results for these tables, from a query like the following, why do some records show up twice in our result set?
FOR EACH <table>:
    DISPLAY <table>.
END.
The Devils in the Details: Case Study #12 Migration to the ODBC DS normally identifies a unique index candidate for RECID
But with the special "conversion" process (available for porting legacy Progress/400 databases to ODBC), a RECID candidate
might not be available amongst the existing table indexes
It must be entered by the user
The customer created a "virtual" index = an index description in the schema holder that doesn’t actually exist in the DB.
The index, real or virtual, needs to be "unique" in order to support RECID
The problem:
The customer had identified a virtual index as the unique RECID index in the schema holder
But the index was not actually unique.
Remember: the schema holder contains just an image of the database
If you mark an index unique in the "schema image", it doesn’t put an actual constraint on the foreign data
How did the non-unique "virtual" index cause duplicate records?
The solution:
First, notice: no lock condition was specified for the query
57. DB-16: In Any Case, the Devil’s in the DataServer Details
FOR EACH <table> SHARE-LOCK:
    DISPLAY <table>.
END.
The Devils in the Details: Case Study #12 (cont.) This is the equivalent of specifying a SHARE-LOCK (the OE default lock mode).
SHARE-LOCKs in Oracle are treated like NO-LOCK, so this is "safe"
In MSS, it’s only safe if your isolation level is READ-UNCOMMITTED.
In ODBC, it’s never safe
Not being proactive with your lock mode can have repercussions for your DS performance (we’ll come back to this later).
EXCLUSIVE-LOCKs are always expensive – irrespective of the DS type
BTW:
Had this customer used NO-LOCK on this query, the problems would not have been seen in the results
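For contrast, the read-only form the notes recommend, which sidesteps the lock-related cursor behavior entirely:

FOR EACH <table> NO-LOCK:
    DISPLAY <table>.
END.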
58. DB-16: In Any Case, the Devil’s in the DataServer Details
FOR EACH <table> EXCLUSIVE-LOCK:
    DISPLAY <table>.
END.
___________________________________
SELECT <columns> FROM <table> WHERE
<key-components> = <key-value>
-- or --
<RECID> = <recid-value>
The Devils in the Details: Case Study #12 (cont.) SHARE-LOCKs in some DSs and ALL EXCLUSIVE-LOCKs in all DSs:
work with the equivalent of "keyset"-driven cursors.
"Keyset" cursors use a set of (unique) keys (RECID in our case) to UNIQUELY describe the rows of a result set.
For the DISPLAY stmt in the example: the DS iterates through the keyset, selecting individual rows as the application needs them
An individual record is selected as shown …
If the key list is not unique, the same key will appear in the "unique" key list twice
Now the same record will be looked up more than once by the WHERE clause.
Another concern over a virtual index:
There is no real index to help with the keyset lookups – so there may be performance implications
59. DB-16: In Any Case, the Devil’s in the DataServer Details DataServer Rule: Uniqueness is critical to the proper functioning of DataServer cursors. Random-access reads and all transactional activity are dependent on being able to locate an individual row.
The Devils in the Details: Case Study #12 (cont.) DataServers will add a unique identifier to the end of non-unique index keys to allow the DataServer to return a specific row.
Oracle and MSS have a special column that can be added during migration (PROGRESS_RECID) to produce uniqueness:
Uniqueness enables the DS to:
Obtain a lock
Delete a row
Update a row
Use a browser
Return a value for the RECID/ROWID functions
Cursor to a specific row in a non-unique result set
Without uniqueness:
you can read rows forward-only, without any row-locking capability
Oracle:
Implemented with a sequence generator (default)
or the native rowid
MSS:
Implemented with IDENTITY columns and an MSS INSERT trigger.
All DataServers:
Allow a unique index to be ROWID.
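A minimal ABL sketch (buffer name illustrative) of what that uniqueness buys: capturing a ROWID and then locating and locking exactly that row:

DEFINE VARIABLE rid AS ROWID NO-UNDO.

FIND FIRST customer NO-LOCK NO-ERROR.
IF AVAILABLE customer THEN DO:
    rid = ROWID(customer).    /* meaningful only with a unique RECID source */
    FIND customer WHERE ROWID(customer) = rid
        EXCLUSIVE-LOCK NO-ERROR.    /* lock exactly that row */
END.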
60. DB-16: In Any Case, the Devil’s in the DataServer Details Why can’t data access through a DataServer be as fast as OpenEdge native access? The Devils in the Details: Performance Study Because
You’re translating integrated language instructions into dispatched SQL instructions
You’re interfacing with non-native database access from a 3rd-party driver context
You’re taking a transaction-based architecture and applying it to a set-based paradigm
61. DB-16: In Any Case, the Devil’s in the DataServer Details FIND FIRST Customer NO-LOCK NO-ERROR.
IF AVAILABLE Customer THEN Cnt = 1.
REPEAT:
FIND NEXT Customer NO-ERROR.
IF NOT AVAILABLE (Customer) THEN LEAVE.
Cnt = Cnt + 1.
END.
OPEN QUERY q FOR EACH Customer NO-LOCK.
REPEAT:
GET NEXT q.
IF NOT AVAILABLE Customer THEN LEAVE.
Cnt = Cnt + 1.
END.
CLOSE QUERY q. The Devils in the Details: Performance Study If you do nothing else ….
Replace your FINDs. Get out Prolint and ProRefactor and start making this change.
Also the Progress Profiler tool – an unsupported tool that ships with OpenEdge – traces code to identify problem areas.
Two designs with the same results: the top is more transactional versus the bottom, which is set-based.
OPEN/CLOSE signify "containment" and provide a more "set-based" application environment
"Scoping" the result set in this way can produce dramatic efficiencies for a DataServer application.
FINDs are open-ended and global. They allow both cursored and random access controls; they encourage transactional software design.
Replacing FINDs with QUERYs will likely improve your OpenEdge performance, but the variation in DataServer performance is more significant.
62. DB-16: In Any Case, the Devil’s in the DataServer Details Replace FIND statements with FORs & QUERYs
FIND FIRST customer -> FOR FIRST customer: END.
FIND LAST order -> bBuffer:FIND-LAST().
Be explicit about lock type
Field Lists
FOR EACH customer FIELDS(cust-num name) NO-LOCK:
Make sure to include all fields you reference. This compiles:FIND FIRST customer FIELDS(cust-num) WHERE CAN-FIND(FIRST order WHERE order.st = cust.st)
Write JOIN-BY-SQLDB queries
FOR EACH customer, EACH order:
Index Reposition
OPEN QUERY q1 FOR EACH order INDEXED-REPOSITION
REPOSITION q1 to recid myid The Devils in the Details: Performance Study More about FIND’s & how they work:
Because the server retrieves and caches a number of record identifiers and then retrieves individual rows
This is slow, but necessary to emulate Progress cursor interaction
FINDs generate significantly more SQL and round trips to the server.
They require index synchronization amongst other FINDs as a language prerequisite
Always get an entire record (ignoring the field list)
Usually require a new query in order to change direction
Removing them will likely improve OE performance as well as DS performance
NOTE: In the MSS & ODBC DSs
FIND LAST/GET LAST are especially slow if the cursor is currently positioned far from the desired record
because the client will continue to make GET NEXT requests until there are no more. (Oracle has been optimized)
________________________________________________________________________________________________
Be explicit about lock type: already discussed
________________________________________________________________________________________________
Use field lists: especially if networked to both the DataServer & the data source.
Put LOBs at the end of your records and/or take them out of field lists when possible. They force cursor downgrades, late binding & extra result-set processing.
NOTE: You must make sure to include fields you reference. The OE compiler may not find all references.
Also, the compiler knows only about the procedure it compiles, not references in other procedures sharing buffers and variables.
OE never guarantees the field list – only that the columns in it will be included.
Only useful for NO-LOCK queries. Other lock modes require all columns.
________________________________________________________________________________________________
Join-by-server: works only for inner joins (today).
Usually an advantage to using the DB optimizer. Occasionally, a join by client performs better.
Example: 2 customer records, each with 5000 orders. JBS will copy the 2 customer rows 5000 times, one per order. A client join asks for them once.
________________________________________________________________________________________________
Index reposition: speeds up random access within a query definition
NOTE: Queries produced by INDEXED-REPOSITION cannot be limited to a specific number of rows (a performance consideration)
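Pulling those tips together, a hedged sketch against the sports demo schema (adjust names to your own): explicit NO-LOCK, field lists that cover every referenced field, and an inner join the server can resolve:

FOR EACH customer FIELDS (cust-num name) NO-LOCK,
    EACH order FIELDS (order-num cust-num)
        WHERE order.cust-num = customer.cust-num NO-LOCK:
    DISPLAY customer.cust-num customer.name order.order-num.
END.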
63. DB-16: In Any Case, the Devil’s in the DataServer Details Query structure
Don’t get fancy
WHERE (city + STRING(“,”) + state) = …
Complex joins may require a client join or client selection
WHERE col1 = INTEGER(‘123’) is better than WHERE STRING(col1) = “123”
Try to make your BY clause and the expected INDEX selection compatible to avoid reordering: WHERE col1 = <> AND col2 = <> By col1, BY col2
The Devils in the Details: Performance Study Don’t get too fancy
Query structure
WHERE (city + STRING(",") + state) =
The expression is resolved on the client after record retrieval; it can’t be executed on the server & needs client selection to filter results
Complex joins
There are many functions and keywords that the client does not trust JBS with (e.g. LEFT OUTER).
Many OE functions do not map at all – or do not map well enough – to the foreign data source and thus require client resolution. So wrapping a column reference in an OpenEdge FUNCTION means the client must resolve it
NOTE: This is an evolving area of potential improvement.
_____________________________________________________________
WHERE col1 = INTEGER("123") is better than WHERE STRING(col1) = "123"
Exposing col1 to the optimizer means
The column reference can be passed to the server
Index selection can help the query plan.
____________________________________________________________
WHERE col1 = <> AND col2 = <> BY col1 BY col2
Aids the optimizer because no additional sorting is required after index selection
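A small sketch of the contrast, using demo-schema names:

/* Server-resolvable: the bare column goes into the SQL WHERE clause,
   so the optimizer can consider an index */
FOR EACH customer WHERE customer.cust-num = INTEGER("123") NO-LOCK:
    DISPLAY customer.name.
END.

/* Client-resolved: the function wraps the column, so rows come back
   to the client to be filtered */
FOR EACH customer WHERE STRING(customer.cust-num) = "123" NO-LOCK:
    DISPLAY customer.name.
END.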
64. DB-16: In Any Case, the Devil’s in the DataServer Details Query Tuning
CACHE-SIZE(<size>)
REVERSE-FROM
NO-INDEX-HINT (and HINT, to supply your own)
NO-BIND-WHERE
NO-UNIQUE-ORDER-ADDED
Database Optimizations
Indexes, covering indexes (clustered indexes (MSS) & "included" columns – MSS 2005)
Updated statistics
Index rebuild/reorg & fill factors, etc.
The Devils in the Details: Performance Study There’s no -Dsrv PANACEA switch. Switches are "tweaks"
Good coding practices and good data management should take priority
So write good queries and optimize your data access (as described in the slide)
*** NEXT ***
CACHE-SIZE – used by "lookahead cursors" (for queries). "Lookahead" means result-set records are fetched in advance of their being needed.
The default cache "block" is 8K (Oracle) & 30K (MSS/ODBC). In general, the default is adequate, but very large queries should expand the size and very small queries should reduce it with QUERY-TUNING.
REVERSE-FROM:
Reverses the order of tables listed in the FROM clause so the foreign DS chooses tables in reverse order to drive the join (performance).
NO-INDEX-HINT & HINT "run fast":
Oracle: add your own hints
NO-BIND-WHERE
The Oracle DataServer by default will always opt for cursor reusability by substituting bind variables into the WHERE clause of a query. But if you don’t anticipate query reuse, the execution plan may be sub-optimal with bind values, because a literal value passed to the optimizer can be quantified in terms of cost when the query plan is prepared. QUERY-TUNING(NO-BIND-WHERE) lets you pass literal values in the SQL handed to the optimizer.
NO-UNIQUE-ORDER-ADDED
A FOR EACH or QUERY (SCROLLING) needs to append PROGRESS_RECID to create uniqueness (for scrollability).
But including PROGRESS_RECID may negate the use of an index that could optimize the query.
NO-UNIQUE-ORDER-ADDED will drop the PROGRESS_RECID from the BY clause to optimize the query
Lock type
In OE applications, an unspecified lock is a SHARE-LOCK. This incurs overhead. The cursor is treated internally like an EXCLUSIVE-LOCK.
MS SQL Server READ-UNCOMMITTED will default <unspecified> or SHARE-LOCKs to NO-LOCK equivalents.
NO-LOCK queries in the MSS DataServer now combine connection pooling and firehose cursors for a dramatic improvement in NO-LOCK performance.
Query optimizations in MSS 2005 appear to enhance that performance even more.
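A hedged sketch of the QUERY-TUNING phrase carrying two of the options above (the cache size is illustrative, not a recommendation; NO-BIND-WHERE applies to the Oracle DS):

FOR EACH customer NO-LOCK
        QUERY-TUNING (CACHE-SIZE 64000 NO-BIND-WHERE):
    DISPLAY customer.name.
END.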
65. DB-16: In Any Case, the Devil’s in the DataServer Details Remember you can always conditionalize:
Compile Time&GLOBAL-DEFINE DB-TYPE ORACLE&IF DEFINED ({&ORACLE}) &THEN …
Run TimeIF DBTYPE(dbname) = “PROGRESS” THEN RUN OpenEdge-optimized-code
ELSE RUN DataServer-optimized-code The Devils in the Details: Performance Study Route to code optimized for the data source
Using pre-processor variables
Or
Run-time variables
Applied to the DBTYPE function
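A minimal run-time routing sketch (procedure names are hypothetical; LDBNAME(1) returns the logical name of the first connected database):

IF DBTYPE(LDBNAME(1)) = "PROGRESS" THEN
    RUN oe-optimized.p.    /* native OpenEdge path */
ELSE
    RUN ds-optimized.p.    /* DataServer path */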
66. DB-16: In Any Case, the Devil’s in the DataServer Details Use stored procedures:
RUN STORED-PROC send-sql-statement hdl1 = PROC-HANDLE
    ("select name, cust_num FROM customer").
FOR EACH proc-text-buffer WHERE PROC-HANDLE = hdl1:
    DISPLAY proc-text.
END.
CLOSE STORED-PROC send-sql-statement rtn-stat = PROC-STATUS
    WHERE PROC-HANDLE = hdl1.

DEF VAR ttHndl AS HANDLE.
DEF TEMP-TABLE tt1 ….
ttHndl = TEMP-TABLE tt1:HANDLE.
RUN STORED-PROC send-sql-statement
    ("select name, cust_num FROM customer")
    LOAD-RESULT-INTO ttHndl.

DEF VAR ttHndl AS HANDLE EXTENT 2.
DEF TEMP-TABLE tt1 …
ttHndl[1] = TEMP-TABLE tt1:HANDLE.
DEF TEMP-TABLE tt2 …
ttHndl[2] = TEMP-TABLE tt2:HANDLE.
RUN STORED-PROC send-sql-statement
    ("select name FROM customer; select order-num FROM order")
    LOAD-RESULT-INTO ttHndl.
The Devils in the Details: Performance Study Stored Procedures & SENDSQL
Gain application performance by running complex queries/joins on the native server through an efficient RPC mechanism, without ABL restrictions.
send-sql-statement – runs any native SQL
proc-text-buffer – a pseudo table that produces a character string of any results (which then requires parsing by the application).
9.1E/10.0B introduced result sets loaded into temp-tables
The TEMP-TABLEs are populated automatically and have these benefits:
Most important: no need to "copy" data from the result set into the temp-table (performance)
No need to define views in the foreign data source to describe results
No need to parse the proc-text-buffer string
No schema administration.
No need to use PROC-HANDLEs.
The OE developer has direct access to the temp-table schema and the query’s data results
The TEMP-TABLE can (see "…" in the code sample):
already exist (static),
be formed at run-time (dynamic prepared),
or be generated automatically based on the result-set schema, using default mappings (dynamic unprepared)
Can be one TEMP-TABLE handle or an array of handles to receive multiple result sets (3rd example). NOTE: The code works with SQL Server/Sybase. The same is done by passing cursors as parameters to stored procedures when using Oracle
TEMP-TABLEs can be used in PRODATASETs (NOTE: updates are not propagated back to the foreign data source (they are unlinked): you must re-read with an EXCLUSIVE-LOCK on the foreign DB)
Can collate the DataServer PROGRESS_RECID or a single-component index to the TEMP-TABLE’s ROWID
Uses optimized cursors – which includes "firehose" for MSS.
__________________________________________________________________________________________________________________________
Oracle Cursor Sample
DEFINE VAR h1 AS INT NO-UNDO.
MESSAGE "BEFORE RUN STORED-PROC" VIEW-AS ALERT-BOX.
RUN STORED-PROCEDURE FIND_customer2 h1 = PROC-HANDLE
    (5, OUTPUT 0, OUTPUT 0, OUTPUT 0).
MESSAGE "AFTER RUN STORED-PROC" VIEW-AS ALERT-BOX.
FOR EACH proc-text-buffer WHERE PROC-HANDLE = h1 AND CURSOR = FIND_customer2.c1:
    DISPLAY proc-text-buffer.
END.
FOR EACH proc-text-buffer WHERE PROC-HANDLE = h1 AND CURSOR = FIND_customer2.c2:
    DISPLAY proc-text-buffer.
END.
FOR EACH proc-text-buffer WHERE PROC-HANDLE = h1 AND CURSOR = FIND_customer2.c3:
    DISPLAY proc-text-buffer.
END.
CLOSE STORED-PROC FIND_customer2 WHERE PROC-HANDLE = h1.
The Procedure:
create procedure find_customer2
    (num in int,
     c1 out curpkg.cur_type,
     c2 out curpkg.cur_type,
     c3 out curpkg.cur_type)
as BEGIN
    OPEN c1 FOR
        select cust_num, name from customer where customer.cust_num < num;
    OPEN c2 FOR
        select name, cust_num from customer where customer.cust_num > 12;
    OPEN c3 FOR
        select cust_num, city from customer where customer.cust_num > 40;
end;
67. DB-16: In Any Case, the Devil’s in the DataServer Details In Summary Know thy Client Component (ABL & DataServer)
Know thy access component (drivers & configuration)
Know thy database component (data manager & engine)
Out of the box, your DataServer will "work" with the ABL
But the better you understand the ABL’s relationship to the DataServer, its access layers & the foreign DB,
the more efficient and useful your ABL code will become in your application environment.
68. DB-16: In Any Case, the Devil’s in the DataServer Details For More Information, go to… PSDN
“DataServer Best Practices”
http://www.psdn.com/library/servlet/KbServlet/download/1320-102-620/ds_best_practices.pdf
Exchange 07:
DB-21 Data Management and Platforms
Roadmap and Info Exchange
(Wednesday 1:30-3:00 Room #200)
69. DB-16: In Any Case, the Devil’s in the DataServer Details
70. DB-16: In Any Case, the Devil’s in the DataServer Details Thank you for your time!
71. DB-16: In Any Case, the Devil’s in the DataServer Details