| id (string, 22-25 chars) | commit_message (string, 137-6.96k chars) | diffs (list, 0-63 items) |
|---|---|---|
derby-DERBY-1701-27fff3cf
|
DERBY-1701 (partial) change SURDataModelSetup to extend from BaseJDBCTestSetup and use its connection handling.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@432031 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1701-4292752a
|
DERBY-1701 (partial) Close statements and result sets in BLOBTest.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@431939 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1701-7ebfefeb
|
DERBY-1555 DERBY-1701 (partial) Add utility methods to BaseJDBCTestCase to get Statements, PreparedStatements against
the default connection for the test.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@432222 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1701-a6464f22
|
DERBY-1701 (partial) Clean up some of the jdbcapi tests by closing statements when finished and
cleaning up the connection. The connection handling needs to be simplified by having default connection
handling in BaseJDBCTestCase.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@431799 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1701-df2b52c0
|
DERBY-1555 DERBY-1701 (partial) Change the name of the TestConfiguration methods to openConnection from getConnection.
Step to having BaseJDBCTestCase.getConnection() be a method matching BaseJDBCTestSetup.getConnection, a handle
to a default connection stored in the instance. This will remove a lot of code in the classes that extend BaseJDBCTestCase
that store a connection locally and have N different ways of cleaning it up.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@431919 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/BaseJDBCTestSetup.java",
"hunks": [
{
"added": [
" * @see TestConfiguration#openDefaultConnection()"
],
"header": "@@ -60,7 +60,7 @@ public abstract class BaseJDBCTestSetup",
"removed": [
" * @see TestConfiguration#getDefaultConnection()"
]
},
{
"added": [
" \treturn conn = getTestConfiguration().openDefaultConnection();"
],
"header": "@@ -70,7 +70,7 @@ public abstract class BaseJDBCTestSetup",
"removed": [
" \treturn conn = getTestConfiguration().getDefaultConnection();"
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/TestConfiguration.java",
"hunks": [
{
"added": [
" * Open connection to the default database.",
" public Connection openDefaultConnection()",
" return openConnection(getDatabaseName());",
" * Open a connection to a database.",
" * @return connection to database.",
" public Connection openConnection (String databaseName) throws SQLException {"
],
"header": "@@ -189,27 +189,27 @@ public class TestConfiguration {",
"removed": [
" * Get connection to the default database.",
" public Connection getDefaultConnection()",
" return getConnection(getDatabaseName());",
" * Get connection to a database.",
" * @return connection to default database.",
" public Connection getConnection (String databaseName) throws SQLException {"
]
}
]
}
] |
derby-DERBY-1701-fb2bfd52
|
DERBY-1555 DERBY-1701 (partial) Incremental step in changing the tests that extend
SURBaseTest to use the single connection provided by BaseJDBCTestCase. Added
initialize method to BaseJDBCTestCase to allow tests to have a consistent
initial state for a connection. The SURBaseTest still has its con variable,
which will be cleaned up in subsequent commits.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@432438 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1701-ff658306
|
DERBY-1555 DERBY-1701 (partial) Change the tests that extend SURBaseTest to use the utility methods
rather than the con field from SURBaseTest to fit into the generic single connection model provided
by BaseJDBCTestCase. Incremental development, next step will be to remove the con field from SURBaseTest.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@432450 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1701-ffe3f662
|
DERBY-1555 DERBY-1701 (partial) Improve the BaseJDBCTest by adding support for a default connection
exactly like BaseJDBCTestSetup. Provides consistent handling for a connection in the common case
of a test using just one. Removes duplicated/inconsistent code across many tests. The first step
has the getConnection method called getXConnection until all the tests have stopped using the getConnection
static method and instead use the openDefaultConnection method.
Change the tests in jdbcapi to the new scheme.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@431999 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
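The DERBY-1701 commits above all push toward one pattern: the base test case owns a single lazily-opened default connection and the single path that cleans it up. A minimal sketch of that pattern follows; the class and method names are illustrative (not Derby's actual code), and a plain AutoCloseable stands in for java.sql.Connection so the sketch runs without a database.

```java
// Stand-in for java.sql.Connection so the sketch needs no database.
class FakeConnection implements AutoCloseable {
    boolean closed;
    @Override public void close() { closed = true; }
}

// Sketch of the "default connection" pattern: lazily open one connection,
// hand the same handle to every caller, and own the single cleanup path,
// so subclasses stop storing their own copies with N different ways of
// cleaning up.
abstract class BaseJDBCTestCaseSketch {
    private FakeConnection conn;

    /** In Derby this role is played by TestConfiguration.openDefaultConnection(). */
    protected abstract FakeConnection openDefaultConnection();

    /** Lazily open and cache the default connection. */
    final FakeConnection getConnection() {
        if (conn == null) {
            conn = openDefaultConnection();
        }
        return conn;
    }

    /** Single cleanup path, as in tearDown(). */
    final void tearDown() {
        if (conn != null) {
            conn.close();
            conn = null;
        }
    }
}
```

Every caller sees the same handle until tearDown() runs, which is what lets the later commits delete per-test con fields.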
derby-DERBY-1704-0f57d0e7
|
DERBY-1704 (partial) Allow more concurrency in the lock manager
Removed some unused code.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@499316 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/locks/SinglePool.java",
"hunks": [
{
"added": [],
"header": "@@ -504,19 +504,6 @@ public class SinglePool extends Hashtable",
"removed": [
"\t/*",
"\t** Non-public methods",
"\t*/",
"",
"//EXCLUDE-START-debug- ",
"",
" public String toDebugString()",
" {",
" return(lockTable.toDebugString());",
" }",
"",
"//EXCLUDE-END-debug- ",
"\t"
]
}
]
}
] |
derby-DERBY-1704-7a0cbb44
|
DERBY-1704 (partial) Allow more concurrency in the lock manager
Modified LockSpace so that it doesn't extend Hashtable.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@507428 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/locks/LockSpace.java",
"hunks": [
{
"added": [],
"header": "@@ -28,7 +28,6 @@ import org.apache.derby.iapi.util.Matchable;",
"removed": [
"import java.util.Hashtable;"
]
},
{
"added": [
" A LockSpace contains a hashtable keyed by the group reference,",
" the data for each key is a HashMap of Lock's.",
"final class LockSpace {",
"\t/** Map from group references to groups of locks. */",
"\tprivate final HashMap groups;"
],
"header": "@@ -39,12 +38,14 @@ import java.util.Iterator;",
"removed": [
" A LockSpace is a hashtable keyed by the group reference,",
"\tthe data for each key is a Hashtable of Lock's.",
"class LockSpace extends Hashtable {"
]
},
{
"added": [
"\t\tgroups = new HashMap();"
],
"header": "@@ -58,9 +59,9 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tsuper();"
]
},
{
"added": [
"\t\tHashMap dl = (HashMap) groups.get(group);"
],
"header": "@@ -71,7 +72,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tHashMap dl = (HashMap) get(group);"
]
},
{
"added": [
"\t\tHashMap dl = (HashMap) groups.remove(group);"
],
"header": "@@ -119,7 +120,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tHashMap dl = (HashMap) remove(group);"
]
},
{
"added": [
"\t\tif ((callbackGroup == null) && groups.isEmpty())"
],
"header": "@@ -127,7 +128,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tif ((callbackGroup == null) && isEmpty())"
]
},
{
"added": [
"\t\tgroups.put(group, dl);"
],
"header": "@@ -149,7 +150,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tput(group, dl);"
]
},
{
"added": [
"\t\tHashMap dl = (HashMap) groups.get(group);"
],
"header": "@@ -167,7 +168,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tHashMap dl = (HashMap) get(group);"
]
},
{
"added": [
"\t\t\tgroups.remove(group);",
"\t\t\tif ((callbackGroup == null) && groups.isEmpty())"
],
"header": "@@ -184,9 +185,9 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\t\tremove(group);",
"\t\t\tif ((callbackGroup == null) && isEmpty())"
]
},
{
"added": [
"\t\tHashMap from = (HashMap) groups.get(oldGroup);",
"\t\tHashMap to = (HashMap) groups.get(newGroup);",
"\t\t\tgroups.put(newGroup, from);",
"\t\t\tgroups.remove(oldGroup);"
],
"header": "@@ -195,16 +196,16 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tHashMap from = (HashMap) get(oldGroup);",
"\t\tHashMap to = (HashMap) get(newGroup);",
"\t\t\tput(newGroup, from);",
"\t\t\tremove(oldGroup);"
]
},
{
"added": [
"\t\t\tObject oldTo = groups.put(newGroup, from);"
],
"header": "@@ -213,7 +214,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\t\tObject oldTo = put(newGroup, from);"
]
},
{
"added": [
"\t\tgroups.remove(oldGroup);"
],
"header": "@@ -223,7 +224,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tremove(oldGroup);"
]
},
{
"added": [
"\t\tHashMap dl = (HashMap) groups.get(group);"
],
"header": "@@ -251,7 +252,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\tHashMap dl = (HashMap) get(group);"
]
},
{
"added": [
"\t\t\t\tgroups.remove(group);",
"\t\t\t\tif ((callbackGroup == null) && groups.isEmpty())"
],
"header": "@@ -276,9 +277,9 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\t\t\tremove(group);",
"\t\t\t\tif ((callbackGroup == null) && isEmpty())"
]
},
{
"added": [
"\t\treturn groups.containsKey(group);",
"\t}",
"",
"\t/**",
"\t * Return true if locks are held in this compatibility space.",
"\t * @return true if locks are held, false otherwise",
"\t */",
"\tsynchronized boolean areLocksHeld() {",
"\t\treturn !groups.isEmpty();",
"\t\tHashMap dl = (HashMap) groups.get(group);"
],
"header": "@@ -298,13 +299,21 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\treturn (get(group) != null);",
"\t\tHashMap dl = (HashMap) get(group);"
]
},
{
"added": [
"\t\t\tif (groups.isEmpty())"
],
"header": "@@ -328,7 +337,7 @@ class LockSpace extends Hashtable {",
"removed": [
"\t\t\tif (isEmpty())"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/locks/SinglePool.java",
"hunks": [
{
"added": [
"\t\treturn ls.areLocksHeld();"
],
"header": "@@ -339,7 +339,7 @@ public class SinglePool extends Hashtable",
"removed": [
"\t\treturn !ls.isEmpty();"
]
}
]
}
] |
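The diff above replaces "LockSpace extends Hashtable" with a private HashMap field, the classic composition-over-inheritance refactor: callers can no longer reach raw Map operations, and the class exposes only the lock-specific API (including the new areLocksHeld()). A simplified sketch, with illustrative names and types rather than Derby's actual ones:

```java
import java.util.HashMap;
import java.util.Map;

// Instead of extending Hashtable (which exposes every Map operation and
// its synchronization to all callers), the groups map is a private field
// and only the intended operations are public to the package.
final class LockSpaceSketch {
    /** Map from group references to groups of locks. */
    private final Map<Object, Map<Object, Object>> groups = new HashMap<>();

    synchronized void addLock(Object group, Object ref, Object lock) {
        groups.computeIfAbsent(group, g -> new HashMap<>()).put(ref, lock);
    }

    synchronized void unlockGroup(Object group) {
        groups.remove(group);
    }

    /** Return true if locks are held in this compatibility space. */
    synchronized boolean areLocksHeld() {
        return !groups.isEmpty();
    }
}
```

Note how areLocksHeld() replaces the inherited isEmpty() that SinglePool used to call on the Hashtable itself.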
derby-DERBY-1704-7b8eea6f
|
DERBY-1704 (cleanup)
* Remove unused imports
* Make classes package private
* Remove check for condition that is always true
* Simplify parameter lists
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@518052 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/locks/LockControl.java",
"hunks": [
{
"added": [],
"header": "@@ -27,7 +27,6 @@ import org.apache.derby.iapi.services.locks.Latch;",
"removed": [
"import java.util.Stack;"
]
},
{
"added": [
"final class LockControl implements Control {"
],
"header": "@@ -42,7 +41,7 @@ import java.util.Map;",
"removed": [
"public class LockControl implements Control {"
]
},
{
"added": [
"\t\tif (!isUnlocked()) {"
],
"header": "@@ -294,7 +293,7 @@ public class LockControl implements Control {",
"removed": [
"\t\tif (!grantLock && !isUnlocked()) {"
]
},
{
"added": [
"\t\taddWaiter(waitingLock, ls);"
],
"header": "@@ -378,7 +377,7 @@ public class LockControl implements Control {",
"removed": [
"\t\taddWaiter(waiting, waitingLock, ls);"
]
},
{
"added": [
"\t\t\tpopFrontWaiter(ls);"
],
"header": "@@ -420,7 +419,7 @@ public class LockControl implements Control {",
"removed": [
"\t\t\tpopFrontWaiter(waiting, ls);"
]
},
{
"added": [
"\t\t\t\tremoveWaiter(removeIndex, ls);",
"\t\t\t\tint count = removeWaiter(item, ls);"
],
"header": "@@ -456,12 +455,12 @@ public class LockControl implements Control {",
"removed": [
"\t\t\t\tremoveWaiter(waiting, removeIndex, ls);",
"\t\t\t\tint count = removeWaiter(waiting, item, ls);"
]
},
{
"added": [
"\t\tint count = removeWaiter(item, ls);"
],
"header": "@@ -507,7 +506,7 @@ public class LockControl implements Control {",
"removed": [
"\t\tint count = removeWaiter(waiting, item, ls);"
]
},
{
"added": [
"\tprivate void addWaiter(Lock lockItem, LockTable ls) {"
],
"header": "@@ -614,13 +613,10 @@ public class LockControl implements Control {",
"removed": [
"\t * @param waiting\tThe list of waiters to add to",
"\tprivate void addWaiter(List waiting,",
"\t\t\t\t\t\tLock lockItem,",
"\t\t\t\t\t\tLockTable ls) {"
]
},
{
"added": [
"\tprivate Object popFrontWaiter(LockTable ls) {",
"\t\treturn removeWaiter(0, ls);"
],
"header": "@@ -632,13 +628,12 @@ public class LockControl implements Control {",
"removed": [
"\t * @param waiting\tThe list of waiters to pop from",
"\tprivate Object popFrontWaiter(List waiting, LockTable ls) {",
"\t\treturn removeWaiter(waiting, 0, ls);"
]
},
{
"added": [
"\tprivate Object removeWaiter(int index, LockTable ls) {"
],
"header": "@@ -646,15 +641,12 @@ public class LockControl implements Control {",
"removed": [
"\t * @param waiting\tThe list of waiters to pop from",
"\tprivate Object removeWaiter(List waiting,",
"\t\t\t\t\t\t\t\tint index,",
"\t\t\t\t\t\t\t\tLockTable ls) {"
]
},
{
"added": [
"\tprivate int removeWaiter(Object item, LockTable ls) {"
],
"header": "@@ -665,15 +657,12 @@ public class LockControl implements Control {",
"removed": [
"\t * @param waiting\tThe list of waiters to pop from",
"\tprivate int removeWaiter(List waiting,",
"\t\t\t\t\t\t\t\tObject item,",
"\t\t\t\t\t\t\t\tLockTable ls) {"
]
}
]
}
] |
derby-DERBY-1704-a67b8777
|
DERBY-1704 (cleanup) Remove unused Hashtable.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@518073 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/locks/LockSet.java",
"hunks": [
{
"added": [],
"header": "@@ -35,7 +35,6 @@ import org.apache.derby.iapi.reference.Property;",
"removed": [
"import java.util.Hashtable;"
]
},
{
"added": [],
"header": "@@ -93,9 +92,6 @@ final class LockSet implements LockTable {",
"removed": [
"\tprivate Hashtable lockTraces; // rather than burden each lock with",
"\t\t\t\t\t\t\t\t // its stack trace, keep a look aside table",
"\t\t\t\t\t\t\t\t // that maps a lock to a stack trace"
]
},
{
"added": [],
"header": "@@ -272,18 +268,9 @@ final class LockSet implements LockTable {",
"removed": [
" if (deadlockTrace)",
" {",
" // we want to keep a stack trace of this thread just before it goes",
" // into wait state, no need to synchronized because Hashtable.put",
" // is synchronized and the new throwable is local to this thread.",
" lockTraces.put(waitingLock, new Throwable());",
" }",
"",
"\t\ttry {"
]
},
{
"added": [],
"header": "@@ -467,15 +454,6 @@ forever:\tfor (;;) {",
"removed": [
" } finally {",
" if (deadlockTrace)",
" {",
" // I am out of the wait state, either I got my lock or I ",
" // am the one who is going to detect the deadlock, don't ",
" // need the stack trace anymore.",
" lockTraces.remove(waitingLock);",
" }",
" }"
]
},
{
"added": [],
"header": "@@ -711,15 +689,6 @@ forever:\tfor (;;) {",
"removed": [
"",
"\t\tif (val && lockTraces == null)",
" {",
"\t\t\tlockTraces = new Hashtable();",
" }",
"\t\telse if (!val && lockTraces != null)",
"\t\t{",
"\t\t\tlockTraces = null;",
"\t\t}"
]
}
]
}
] |
derby-DERBY-1704-fa8c910d
|
DERBY-1704 (partial) Allow more concurrency in the lock manager
* Made LockSet contain a HashMap instead of extending Hashtable.
* Fixed some comments about MT/synchronization.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@498999 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/locks/Deadlock.java",
"hunks": [
{
"added": [
"\t/**",
"\t * Look for a deadlock.",
"\t * <BR>",
"\t * MT - must be synchronized on the <code>LockSet</code> object.",
"\t */"
],
"header": "@@ -46,6 +46,11 @@ class Deadlock {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/locks/LockSet.java",
"hunks": [
{
"added": [],
"header": "@@ -24,7 +24,6 @@ package org.apache.derby.impl.services.locks;",
"removed": [
"import org.apache.derby.iapi.services.monitor.Monitor;"
]
},
{
"added": [
"import java.util.Dictionary;",
"import java.util.HashMap;",
"import java.util.Iterator;",
"import java.util.Map;"
],
"header": "@@ -34,8 +33,12 @@ import org.apache.derby.iapi.services.diag.DiagnosticUtil;",
"removed": []
},
{
"added": [
"\tMT - Mutable - Container Object : All non-private methods of this class are",
"\tthread safe unless otherwise stated by their javadoc comments.",
"\tAll searching of"
],
"header": "@@ -46,10 +49,11 @@ import java.util.Enumeration;",
"removed": [
"\tMT - Mutable - Container Object : Thread Safe",
"\tThe Hashtable we extend is synchronized on this, all addition, searching of"
]
},
{
"added": [
"final class LockSet {",
" /** Hash table which maps <code>Lockable</code> objects to",
" * <code>Lock</code>s. */",
" private final HashMap locks;",
""
],
"header": "@@ -65,13 +69,16 @@ import java.util.Enumeration;",
"removed": [
"public final class LockSet extends Hashtable",
"{"
]
},
{
"added": [
"\tprivate int blockCount;",
"\t\tlocks = new HashMap();"
],
"header": "@@ -92,15 +99,15 @@ public final class LockSet extends Hashtable",
"removed": [
"\tprotected int\tblockCount;",
"\t\tsuper();"
]
},
{
"added": [
"\t\t\t\tif (locks.size() > 1000)",
"\t\t\t\t\tSystem.out.println(\"memoryLeakTrace:LockSet: \" +",
" locks.size());"
],
"header": "@@ -129,8 +136,9 @@ public final class LockSet extends Hashtable",
"removed": [
"\t\t\t\tif (size() > 1000)",
"\t\t\t\t\tSystem.out.println(\"memoryLeakTrace:LockSet: \" + size());"
]
},
{
"added": [
"\t\t\t\tlocks.put(ref, gl);",
"\t\t\t\tlocks.put(ref, control);"
],
"header": "@@ -150,14 +158,14 @@ public final class LockSet extends Hashtable",
"removed": [
"\t\t\t\tput(ref, gl);",
"\t\t\t\tput(ref, control);"
]
},
{
"added": [
"\t\t\t\t\tlocks.remove(control.getLockable());"
],
"header": "@@ -550,7 +558,7 @@ forever:\tfor (;;) {",
"removed": [
"\t\t\t\t\tremove(control.getLockable());"
]
},
{
"added": [
" for (Iterator it = locks.values().iterator(); it.hasNext(); )",
" DiagnosticUtil.toDiagString(it.next());"
],
"header": "@@ -589,12 +597,10 @@ forever:\tfor (;;) {",
"removed": [
" for (Enumeration e = this.elements(); ",
" e.hasMoreElements();",
" i++)",
" DiagnosticUtil.toDiagString(e.nextElement());"
]
},
{
"added": [
" /**",
" * Add all waiters in this lock table to a <code>Dictionary</code> object.",
" * <br>",
" * MT - must be synchronized on this <code>LockSet</code> object.",
" */",
" void addWaiters(Dictionary waiters) {",
" for (Iterator it = locks.values().iterator(); it.hasNext(); ) {",
" Control control = (Control) it.next();",
" control.addWaiters(waiters);",
" }",
" }",
"",
"\tsynchronized Map shallowClone()",
"\t\tHashMap clone = new HashMap();",
"\t\tfor (Iterator it = locks.keySet().iterator(); it.hasNext(); )",
"\t\t\tLockable lockable = (Lockable) it.next();"
],
"header": "@@ -605,18 +611,30 @@ forever:\tfor (;;) {",
"removed": [
"\tsynchronized LockSet shallowClone()",
"\t\tLockSet clone = new LockSet(factory);",
"\t\tfor (Enumeration e = keys(); e.hasMoreElements(); )",
"\t\t\tLockable lockable = (Lockable)e.nextElement();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/locks/LockTableVTI.java",
"hunks": [
{
"added": [
"import java.util.Iterator;",
"import java.util.Map;"
],
"header": "@@ -23,13 +23,13 @@ package org.apache.derby.impl.services.locks;",
"removed": [
"import java.util.Hashtable;",
"import java.util.Vector;"
]
},
{
"added": [
"\tprivate final Iterator outerControl;",
"\tLockTableVTI(Map clonedLockTable)",
"\t\touterControl = clonedLockTable.values().iterator();"
],
"header": "@@ -45,18 +45,15 @@ class LockTableVTI implements Enumeration",
"removed": [
"\tprivate final LockSet clonedLockTable;",
"\tprivate final Enumeration outerControl;",
"\tLockTableVTI(LockSet clonedLockTable)",
"\t\tthis.clonedLockTable = clonedLockTable;",
"",
"\t\touterControl = clonedLockTable.elements();"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/locks/SinglePool.java",
"hunks": [
{
"added": [
"\t\t\tControl control = lockTable.getControl(ref);"
],
"header": "@@ -422,7 +422,7 @@ public class SinglePool extends Hashtable",
"removed": [
"\t\t\tControl control = (Control) lockTable.get(ref);"
]
}
]
}
] |
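A detail worth noting in the diff above: LockTableVTI no longer holds the live LockSet but iterates a shallow copy produced by the synchronized shallowClone(), so monitoring can walk the snapshot while the real table keeps changing. A hedged sketch of that snapshot idiom, with simplified types that are not Derby's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Snapshot pattern: mutate under synchronization, and give readers a
// shallow copy taken under the same lock so they can iterate freely
// without risking ConcurrentModificationException.
final class LockTableSketch {
    private final Map<String, String> locks = new HashMap<>();

    synchronized void put(String lockable, String control) {
        locks.put(lockable, control);
    }

    /** Copy the table under synchronization; callers iterate the copy freely. */
    synchronized Map<String, String> shallowClone() {
        return new HashMap<>(locks);
    }
}
```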
derby-DERBY-1706-4d04c7bf
|
DERBY-1706
contributed by Mamta Satoor
This fix addresses the null pointer reported in DERBY-1706.
SESSION schema is a special schema which is used for global temporary tables.
In order to handle global temporary tables, Derby creates an in-memory SESSION
schema descriptor which does not have any uuid associated with it. A physical
SESSION schema (with its uuid set properly) will be created *only* if there is
a persistent object created in it by a user. Global temporary tables can only
reside in SESSION schema and Derby documentation recommends that SESSION schema
should not be used for other persistent objects. This is because the same
object name could be referencing different objects within SESSION schema
depending on in what order they got created.
For instance
create table session.t1(c11 int);
-- the following select will get data from the persistent table t1 in SESSION schema
select * from session.t1;
declare global temporary table session.t1(c11 int, c12 int) on commit delete rows not logged;
-- the following select this time will return data from the temporary table t1 rather than persistent table t1
-- This is because, at any time, if there is a global temporary table by the name referenced by a statement,
-- then Derby will always pick up the global temporary table. If no global temporary table by that name is
-- found, then Derby will look for persistent table in SESSION schema. If none found, then error will be thrown
select * from session.t1;
-- following will drop the temporary table t1 and not the persistent table t1
drop table session.t1;
-- the following select will get data from the persistent table t1 in SESSION schema because temporary table
-- doesn't exist anymore
select * from session.t1;
So, as can be seen from the example above, the statements referencing SESSION schema objects could have different meanings depending on what kind of objects exist in SESSION schema. Because of this, the compiled plans of statements referencing SESSION schema are not saved in the statement cache, rather they get compiled every time they are executed. In order to enforce this, in the compilation phase, Derby checks if the statement at hand is referencing a SESSION schema object by calling the referencesSessionSchema method. If this method returns true, the statement's compiled plan will not be saved in the statement cache.
Now, taking the script provided by Yip which results in NPE
set schema session;
create table t1 (i int);
Derby calls referencesSessionSchema while compiling "create table t1 (i int); " to see if it references a SESSION schema object. Since there is no schema associated with the table t1, Derby will check the compilation schema, which in this case is SESSION schema because we used "set schema session; ". (This happens in QueryTreeNode.getSchemaDescriptor(String schemaName, boolean raiseError) line 1486). The method
then tries to call an equals method on the UUID associated with the SESSION schema descriptor, but since it is null (because no physical SESSION schema has been created yet), we end up with a null pointer exception. This will happen only if no physical SESSION schema has been created yet and the user tries to create a first persistent object inside SESSION schema after doing a set schema session.
The following will not give an NPE because the user hand-created the SESSION schema before doing set schema SESSION and creating an object inside it.
create schema session;
set schema session;
create table t1 (i int);
The hand creation of SESSION schema causes Derby to have a schema descriptor for SESSION schema with its uuid set correctly.
The fix for the NPE is simple: check whether the UUID is null.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@447212 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
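The DERBY-1706 fix boils down to not calling equals on a UUID that may still be null (the in-memory SESSION schema descriptor has no uuid until a physical schema exists). A minimal sketch of the null-safe comparison, using hypothetical class and method names rather than Derby's actual descriptor API:

```java
import java.util.Objects;

// The buggy code path effectively did this.uuid.equals(other.uuid), which
// throws NPE when uuid is null. Objects.equals() is one null-safe way to
// express the guarded check the fix describes.
final class SchemaDescriptorSketch {
    private final String name;
    private final String uuid; // null until a physical schema is created

    SchemaDescriptorSketch(String name, String uuid) {
        this.name = name;
        this.uuid = uuid;
    }

    /** Null-safe comparison; no NPE for the in-memory SESSION descriptor. */
    boolean sameSchemaAs(SchemaDescriptorSketch other) {
        return this.name.equals(other.name)
                && Objects.equals(this.uuid, other.uuid);
    }
}
```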
derby-DERBY-171-444aa520
|
Committing changes for DERBY-171 - Need correlation ID in UPDATE/DELETE statements.
Submitted by Rick Hillegas.
Comments from Rick:
I have added the optional correlation name clauses to the UPDATE and DELETE productions in the parser and added supporting bind-time logic. In addition to fixing this particular bug, I have significantly changed the binding of correlated subqueries which have GROUP BY or HAVING clauses: I am now passing the outer fromList context down the subquery binding stack. This makes it possible to bind correlated references in those subqueries and fixes a cluster of other bugs. These other correlated subqueries were failing to compile because the bind logic did not match the way that the parser rewrites the query tree in these cases. I tripped across these problems in the lang/refActions1.sql regression test. I have extensively updated the canon for that test. Looking at the old canon, it appears to me that the old canon was riddled with incorrect results.
Here are some responses to issues which Army raised while reviewing the first rev of this bugfix:
1) This bugfix fixes some other, unlogged bugs. These are basically syntax errors raised by the parser when it encounters correlated references in subqueries which contain GROUP BY or HAVING clauses. The problem was that the parser does something clever. It takes advantage of the fact that the HAVING clause functions like a WHERE clause on the GROUP BY result. The parser then makes the GROUP BY result an outer query with the HAVING clause as its WHERE clause, and the parser then turns the rest of the SELECT into a subquery which feeds the GROUP BY outer query. However, the binding logic for these rewritten GROUP BY results was not as clever as the parser. Subselects which had GROUP BY or HAVING clauses were not passed the list of correlated tables and so failed to bind correlated references. Perhaps an example would help:
select e.* from employee e
where e.bonus <
( select b.bonus from bonus b where b.empid=e.empid group by bonus having bonus > 3)
In this case, the query would be rewritten to have 3 rather than 2 levels. The outer level would remain like the original. But the subselect would be rewritten to have its own outer select, consisting of the GROUP BY and HAVING clauses and an inner select consisting of the SELECT B.BONUS. In binding this query, level 1 would pass its correlated from list down to level 2, but level 2 would not pass the list on to level 3. However, level 3 needs the correlated from list in order to resolve b.empid=e.empid.
2) This bugfix changes some queries in refActions1.sql. These are queries which used to raise syntax errors because of the bugs mentioned in (1) above. I first changed these queries by qualifying some dangling references with correlation names. I did this to prove that the syntax errors were not being caused by ambiguity. I verified that the changed queries continued to raise the same syntax errors. Then I fixed the bugs mentioned in (1) above. Most of the queries then successfully compiled. What did the authors of this test hope to demonstrate? It's hard to say since the comments indicate that these statements are supposed to both be correlated and to complete successfully but, of course, they didn't complete. The changed queries satisfy the minimal contract in the comments: the statements have correlated references and they complete successfully. Do these changes mask other bugs? Possibly. Were those other bugs disclosed by the previous state of the test? No. The changed queries at least track something useful now: syntax that is supposed to compile. I think this is an improvement.
Second rev of bugfix. Incorporates Army's feedback: 1) Removes FromBaseTable.java, which had a vacuous diff, 2) Moves regression tests into update.sql and delete.sql.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@231366 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ColumnReference.java",
"hunks": [
{
"added": [
"\t\t\t\t\"tableName: \" + ( ( tableName != null) ? tableName.toString() : \"null\") + \"\\n\" +"
],
"header": "@@ -142,9 +142,7 @@ public class ColumnReference extends ValueNode",
"removed": [
"\t\t\t\t( ( tableName != null) ?",
"\t\t\t\t\t\ttableName.toString() :",
"\t\t\t\t\t\t\"tableName: null\\n\") +"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/CurrentOfNode.java",
"hunks": [
{
"added": [
"\tpublic void init( Object correlationName, Object cursor, Object tableProperties)",
"\t\tsuper.init(correlationName, tableProperties);"
],
"header": "@@ -98,9 +98,9 @@ public final class CurrentOfNode extends FromTable {",
"removed": [
"\tpublic void init(Object cursor, Object tableProperties)",
"\t\tsuper.init(null, tableProperties);"
]
},
{
"added": [
"\t\tif (",
"\t\t\t (columnsTableName == null) ||",
"\t\t\t (columnsTableName.getFullTableName().equals(baseTableName.getFullTableName())) ||",
"\t\t\t ((correlationName != null) && correlationName.equals( columnsTableName.getTableName()))",
"\t\t )"
],
"header": "@@ -337,7 +337,11 @@ public final class CurrentOfNode extends FromTable {",
"removed": [
"\t\tif (columnsTableName == null || columnsTableName.getFullTableName().equals(baseTableName.getFullTableName()))"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DeleteNode.java",
"hunks": [
{
"added": [
""
],
"header": "@@ -238,6 +238,7 @@ public class DeleteNode extends DMLModStatementNode",
"removed": []
},
{
"added": [
""
],
"header": "@@ -246,6 +247,7 @@ public class DeleteNode extends DMLModStatementNode",
"removed": []
},
{
"added": [
"\t\t\t/* Force the added columns to take on the table's correlation name, if any */",
"\t\t\tcorrelateAddedColumns( resultColumnList, targetTable );",
"\t\t\t"
],
"header": "@@ -285,6 +287,9 @@ public class DeleteNode extends DMLModStatementNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromList.java",
"hunks": [
{
"added": [
"\tpublic void bindExpressions( FromList fromListParam )"
],
"header": "@@ -309,7 +309,7 @@ public class FromList extends QueryTreeNodeVector implements OptimizableList",
"removed": [
"\tpublic void bindExpressions()"
]
},
{
"added": [
"\t\t\tfromTable.bindExpressions( makeFromList( fromListParam, fromTable ) );",
"\t/**",
"\t * Construct an appropriate from list for binding an individual",
"\t * table element. Normally, this is just this list. However,",
"\t * for the special wrapper queries which the parser creates for",
"\t * GROUP BY and HAVING clauses, the appropriate list is the",
"\t * outer list passed into us--it will contain the appropriate",
"\t * tables needed to resolve correlated columns.",
"\t */",
"\tprivate\tFromList\tmakeFromList( FromList fromListParam, FromTable fromTable )",
"\t{",
"\t\tif ( fromTable instanceof FromSubquery )",
"\t\t{",
"\t\t\tFromSubquery\tfromSubquery = (FromSubquery) fromTable;",
"",
"\t\t\tif ( fromSubquery.generatedForGroupByClause || fromSubquery.generatedForHavingClause )",
"\t\t\t{ return fromListParam; }",
"\t\t}",
"",
"\t\treturn this;",
"\t}",
"\t"
],
"header": "@@ -318,10 +318,31 @@ public class FromList extends QueryTreeNodeVector implements OptimizableList",
"removed": [
"\t\t\tfromTable.bindExpressions(this);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumn.java",
"hunks": [
{
"added": [
"\t * See UpdateNode.scrubResultColumns() for full explaination."
],
"header": "@@ -267,7 +267,7 @@ public class ResultColumn extends ValueNode",
"removed": [
"\t * See UpdateNode for full explaination."
]
},
{
"added": [
""
],
"header": "@@ -764,6 +764,7 @@ public class ResultColumn extends ValueNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UpdateNode.java",
"hunks": [
{
"added": [
""
],
"header": "@@ -225,6 +225,7 @@ public final class UpdateNode extends DMLModStatementNode",
"removed": []
},
{
"added": [
"\t\tnormalizeCorrelatedColumns( resultSet.resultColumns, targetTable );"
],
"header": "@@ -341,6 +342,7 @@ public final class UpdateNode extends DMLModStatementNode",
"removed": []
},
{
"added": [
"\t\t/*",
"\t\t * The last thing that we do to the generated RCL is to clear",
"\t\t * the table name out from each RC. See comment on scrubResultColumns().",
"\t\tscrubResultColumns( resultColumnList );"
],
"header": "@@ -496,17 +498,11 @@ public final class UpdateNode extends DMLModStatementNode",
"removed": [
"\t\t/* The last thing that we do to the generated RCL is to clear",
"\t\t * the table name out from each RC. The table name is",
"\t\t * unnecessary for an update. More importantly, though, it",
"\t\t * creates a problem in the degenerate case with a positioned",
"\t\t * update. The user must specify the base table name for a",
"\t\t * positioned update. If a correlation name was specified for",
"\t\t * the cursor, then a match for the ColumnReference would not",
"\t\t * be found if we didn't null out the name. (Aren't you",
"\t\t * glad you asked?)",
"\t\tresultColumnList.clearTableNames();"
]
}
]
}
] |
derby-DERBY-1710-6f4ffc7f
|
DERBY-1710: Unchecked casts from SQLException to EmbedSQLException
cause ClassCastException in NetworkServerControlImpl when running
Java SE 6
The attached patch makes NetworkServerControlImpl use
SQLException.getSQLState() instead of EmbedSQLException.getMessageId()
where possible. Where it is not possible, check that the exception is
EmbedSQLException before casting, and fall back to a more generic
approach if it is not.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@432493 13f79535-47bb-0310-9956-ffa450edef68
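The portable check the patch switches to can be sketched with plain JDBC. The class and constant names below are illustrative, not Derby's internals; "XJ015" is Derby's documented SQLState for a system shutdown.

```java
import java.sql.SQLException;

// Sketch of the portable pattern: compare SQLState strings instead of
// casting the exception to the vendor-specific EmbedSQLException class.
public class ShutdownCheck {
    // Derby's SQLState for "system shutdown"; the constant name is illustrative.
    static final String SHUTDOWN_STATE = "XJ015";

    // Works for any SQLException implementation, including the java.sql
    // subclasses a Java SE 6 driver may throw instead of EmbedSQLException.
    static boolean isSystemShutdown(SQLException sqle) {
        return SHUTDOWN_STATE.equals(sqle.getSQLState());
    }

    public static void main(String[] args) {
        System.out.println(isSystemShutdown(
                new SQLException("shutting down", "XJ015"))); // true
        System.out.println(isSystemShutdown(
                new SQLException("syntax error", "42X01")));  // false
    }
}
```

Note the `equals` call is made on the expected constant, so a null SQLState from the driver simply yields false rather than a NullPointerException.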
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.error.StandardException;"
],
"header": "@@ -50,6 +50,7 @@ import java.util.StringTokenizer;",
"removed": []
},
{
"added": [
"\t\t\t\tString expectedState =",
"\t\t\t\t\tStandardException.getSQLStateFromIdentifier(",
"\t\t\t\t\t\t\tSQLState.CLOUDSCAPE_SYSTEM_SHUTDOWN);",
"\t\t\t\tif (!expectedState.equals(sqle.getSQLState())) {",
"\t\t\t\t}"
],
"header": "@@ -662,10 +663,13 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\t\tif (((EmbedSQLException)sqle).getMessageId() !=",
"\t\t\t\t SQLState.CLOUDSCAPE_SYSTEM_SHUTDOWN)"
]
},
{
"added": [
"\t\t\tif (currentSession != null && currentSession.langUtil != null &&",
"\t\t\t\tse instanceof EmbedSQLException)"
],
"header": "@@ -1555,7 +1559,8 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\tif (currentSession != null && currentSession.langUtil != null)"
]
},
{
"added": [
"\t\t\tString expectedState =",
"\t\t\t\tStandardException.",
"\t\t\t\t\tgetSQLStateFromIdentifier(SQLState.SHUTDOWN_DATABASE);",
"\t\t\tif (!expectedState.equals(se.getSQLState()))"
],
"header": "@@ -3235,7 +3240,10 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t\tif (!(((EmbedSQLException)se).getMessageId() == SQLState.SHUTDOWN_DATABASE))"
]
}
]
}
] |
derby-DERBY-1714-8aff1cda
|
DERBY-766 DERBY-1714 Working method in CodeChunk that splits expressions out of generated methods that are too large.
Bumps the number of unions supported in largeCodeGen to over 6,000 from around 800. Also increases the
number of rows supported in a VALUES clause. A large number of UNION clauses still requires a large amount of
memory for optimization (see DERBY-1315). A large number of rows in a VALUES clause fails at some point due to
a StackOverflow. A subsequent commit will modify largeCodeGen to be a JUnit test and adapt to these changes,
but I am running into issues finding useful working limits that can produce repeatable results without
hitting memory issues.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@432856 13f79535-47bb-0310-9956-ffa450edef68
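The cost model behind the splitMinLength computation added to CodeChunk in this commit can be checked standalone: calling a generated sub-method costs one byte for ALOAD_0 (`this`), one byte per load of each of the first three parameters, an extra byte per load beyond that, and three bytes for the invoke instruction itself. A self-contained sketch (the class name is illustrative, and it takes a parameter count where the real method inspects `mb.parameters`):

```java
// Standalone sketch of CodeChunk.splitMinLength: the number of code bytes
// needed just to call a generated sub-method. Splitting out a block shorter
// than this cannot shrink the calling method.
public class SplitMinLength {
    static int splitMinLength(int paramCount) {
        int min = 1 + 3;            // ALOAD_0 for 'this' + 3-byte invoke
        min += paramCount;          // one byte per parameter load
        if (paramCount > 3)
            min += paramCount - 3;  // extra byte for loads beyond the first 3
        return min;
    }

    public static void main(String[] args) {
        System.out.println(splitMinLength(0)); // 4
        System.out.println(splitMinLength(3)); // 7
        System.out.println(splitMinLength(5)); // 11
    }
}
```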
|
[
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/BCMethod.java",
"hunks": [
{
"added": [
" "
],
"header": "@@ -74,7 +74,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
""
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/CodeChunk.java",
"hunks": [
{
"added": [
" ",
" int splitMinLength = splitMinLength(mb);",
" "
],
"header": "@@ -1285,6 +1285,9 @@ final class CodeChunk {",
"removed": []
},
{
"added": [
" // worth it.",
" if (possibleSplitLength <= splitMinLength)"
],
"header": "@@ -1374,11 +1377,8 @@ final class CodeChunk {",
"removed": [
" // worth it. 100 is an arbitary number,",
" // a real low limit would be the number of",
" // bytes of instructions required to call",
" // the sub-method, four I think.",
" if (possibleSplitLength < 100)"
]
},
{
"added": [
" final int splitExpressionOut(final BCMethod mb, final ClassHolder ch,",
" final int maxStack)",
" String bestSplitRT = null; ",
" ",
" int splitMinLength = splitMinLength(mb);"
],
"header": "@@ -1718,14 +1718,16 @@ final class CodeChunk {",
"removed": [
" final int splitExpressionOut(BCMethod mb, ClassHolder ch,",
" int maxStack)",
" String bestSplitRT = null; "
]
},
{
"added": [
" ",
" // TODO: This conditional handling was copied",
" // from splitZeroStack, haven't looked in detail",
" // to see how a conditional should be handled",
" // with an expression split. So for the time",
" // being just bail.",
" if (true)",
" return -1;",
""
],
"header": "@@ -1781,6 +1783,15 @@ final class CodeChunk {",
"removed": []
},
{
"added": [
" // no plan to split here though, as we are only",
" // splitting methods that return a reference.",
" selfContainedBlockStart = -1;",
" // earliestIndepPC[stack + 1];"
],
"header": "@@ -1920,8 +1931,10 @@ final class CodeChunk {",
"removed": [
" selfContainedBlockStart =",
" earliestIndepPC[stack + 1];"
]
},
{
"added": [
" // no plan to split here though, as we are only",
" // splitting methods that return a reference.",
" selfContainedBlockStart = -1;",
" ",
" // top two words depend on the objectref",
" // which was at the same depth of the first word",
" // of the 64 bit value.",
" earliestIndepPC[stack] =",
" if (blockLength <= splitMinLength)",
" // No point splitting, too small",
" }",
" else if (blockLength > (VMOpcode.MAX_CODE_LENGTH - 1))",
" {",
" // too big to split into a single method",
" // (one for the return opcode)",
" }",
" else",
" {",
" // Only split for a method that returns",
" // an class reference.",
" int me = vmDescriptor.lastIndexOf(')');",
" ",
" if (vmDescriptor.charAt(me+1) == 'L')",
" String rt = vmDescriptor.substring(me + 2,",
" vmDescriptor.length() - 1);",
" ",
" // convert to external format.",
" rt = rt.replace('/', '.');",
" ",
" if (blockLength >= optimalMinLength)",
" {",
" // Split now!",
" BCMethod subMethod = startSubMethod(mb,",
" rt, selfContainedBlockStart,",
" blockLength);",
" ",
" return splitCodeIntoSubMethod(mb, ch, subMethod,",
" selfContainedBlockStart, blockLength); ",
" } ",
" else if (blockLength > bestSplitBlockLength)",
" {",
" // Save it, may split at this point",
" // if nothing better seen.",
" bestSplitPC = selfContainedBlockStart;",
" bestSplitBlockLength = blockLength;",
" bestSplitRT = rt;",
" }"
],
"header": "@@ -1933,47 +1946,62 @@ final class CodeChunk {",
"removed": [
" selfContainedBlockStart = earliestIndepPC[stack] =",
"",
" // Only split for a method that returns",
" // an class reference.",
" int me = vmDescriptor.lastIndexOf(')');",
" if (vmDescriptor.charAt(me+1) == 'L')",
" String rt = vmDescriptor.substring(me + 2,",
" vmDescriptor.length() - 1);",
" ",
" if (blockLength > (VMOpcode.MAX_CODE_LENGTH - 1))",
" {",
" // too big to split into a single method",
" // (one for the return opcode)",
" } ",
" else if (blockLength >= optimalMinLength)",
" {",
" // Split now!",
" System.out.println(\"NOW \" + blockLength",
" + \" @ \" + selfContainedBlockStart);",
" BCMethod subMethod = startSubMethod(mb,",
" rt, selfContainedBlockStart,",
" blockLength);",
"",
" return splitCodeIntoSubMethod(mb, ch, subMethod,",
" selfContainedBlockStart, blockLength); ",
" } ",
" else if (blockLength > bestSplitBlockLength)",
" // Save it, may split at this point",
" // if nothing better seen. ",
" bestSplitPC = selfContainedBlockStart;",
" bestSplitBlockLength = blockLength;",
" bestSplitRT = rt;"
]
},
{
"added": [
"",
" if (bestSplitBlockLength != -1) {",
" ",
" bestSplitPC, bestSplitBlockLength); ",
" "
],
"header": "@@ -1983,19 +2011,16 @@ final class CodeChunk {",
"removed": [
" if (bestSplitBlockLength > 100)",
" {",
" System.out.println(\"BEST \" + bestSplitBlockLength",
" + \" @ \" + bestSplitPC);",
"",
" bestSplitBlockLength, bestSplitBlockLength); ",
" ",
" "
]
},
{
"added": [
" /**",
" * Minimum split length for a sub-method. If the number of",
" * instructions to call the sub-method exceeds the length",
" * of the sub-method, then there's no point splitting.",
" * The number of bytes in the code stream to call",
" * a generated sub-method can take is based upon the number of method args.",
" * A method can have maximum of 255 words of arguments (section 4.10 JVM spec)",
" * which in the worst case would be 254 (one-word) parameters",
" * and this. For a sub-method the arguments will come from the",
" * parameters to the method, i.e. ALOAD, ILOAD etc.",
" * <BR>",
" * This leads to this number of instructions.",
" * <UL>",
" * <LI> 4 - 'this' and first 3 parameters have single byte instructions",
" * <LI> (N-4)*2 - Remaining parameters have two byte instructions",
" * <LI> 3 for the invoke instruction.",
" * </UL>",
" */",
" private static int splitMinLength(BCMethod mb) {",
" int min = 1 + 3; // For ALOAD_0 (this) and invoke instruction",
" ",
" if (mb.parameters != null) {",
" int paramCount = mb.parameters.length;",
" ",
" min += paramCount;",
" ",
" if (paramCount > 3)",
" min += (paramCount - 3);",
" }",
" ",
" return min;",
" }"
],
"header": "@@ -2020,6 +2045,38 @@ final class CodeChunk {",
"removed": []
}
]
}
] |
derby-DERBY-1714-b1397ecd
|
DERBY-766 DERBY-1714 Convert largeCodeGen to a JUnit test, add it to the lang._Suite and add that to
the derbylang.runall old harness suite. Added tests for inserting a large number of rows with a VALUES
clause. Test needs further improvements due to errors from DERBY-1315 and stack overflow with
a large INSERT VALUES clause.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@433085 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/JDBC.java",
"hunks": [
{
"added": [
"\t * Provides simple testing of the ResultSet when the contents"
],
"header": "@@ -299,7 +299,7 @@ public class JDBC {",
"removed": [
"\t * Provides simple testing of the ResultSet when then contents"
]
},
{
"added": [
" ",
" /**",
" * Assert a SQL state is the expected value.",
" * @param expected Expected SQLState.",
" * @param sqle SQLException caught",
" */",
" public static void assertSQLState(String expected, SQLException sqle)",
" {",
" Assert.assertEquals(\"Unexpected SQL State\", expected, sqle.getSQLState());",
" }"
],
"header": "@@ -318,6 +318,16 @@ public class JDBC {",
"removed": []
}
]
}
] |
derby-DERBY-1716-8af8676c
|
DERBY-1716
contributed by Yip Ng
patch: derby1716-trunk-diff03.txt
Unlike other descriptors, when privilege(s) get revoked from a user,
the statement is not subject to recompilation, so we are back to square one,
since the previous patch attempts to bring in the permission descriptor(s) into
the permission cache at compilation time to avoid reading from system tables at
execution time.
I believe the proper fix is to use an internal nested read-only transaction
when the system is reading permission descriptors from the system tables. At a
high level, a statement undergoes the following typical steps for it to get executed
by the system:
1. Statement Compilation Phase
a) Parse the statement
b) Bind the statement and collect required permissions for it to be executed.
c) Optimize the statement
d) Generate the activation for the statement
2. Statement Execution Phase
a) Check if the authorization id has the required privileges to execute the statement.
b) Execute the statement
The problem lies in the permissions checking step of the statement execution phase. Before a statement can be executed in SQL authorization mode, the authorization id's privileges need to be checked against the permission cache, or, if the privileges are not available in the cache, the system needs to read this metadata from the system tables. But the system is using the *user transaction* to do this, so the shared locks acquired by the user transaction may not get released immediately, leading to lock timeouts when the grantor attempts to revoke the user's privilege. To resolve this issue, the system will now start an internal read-only nested transaction (same lock space as the parent transaction) to read permission-related info from the system tables and release the shared locks
as soon as the permissions check is completed before statement execution. This tackles the root of the stated problem.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@453935 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/conn/GenericAuthorizer.java",
"hunks": [
{
"added": [
" if( requiredPermissionsList != null && ",
" !requiredPermissionsList.isEmpty() && ",
" int ddMode = dd.startReading(lcc);",
" ",
" /*",
" * The system may need to read the permission descriptor(s) ",
" * from the system table(s) if they are not available in the ",
" * permission cache. So start an internal read-only nested ",
" * transaction for this.",
" * ",
" * The reason to use a nested transaction here is to not hold",
" * locks on system tables on a user transaction. e.g.: when",
" * attempting to revoke an user, the statement may time out",
" * since the user-to-be-revoked transaction may have acquired ",
" * shared locks on the permission system tables; hence, this",
" * may not be desirable. ",
" * ",
" * All locks acquired by StatementPermission object's check()",
" * method will be released when the system ends the nested ",
" * transaction.",
" * ",
" * In Derby, the locks from read nested transactions come from",
" * the same space as the parent transaction; hence, they do not",
" * conflict with parent locks.",
" */ ",
" lcc.beginNestedTransaction(true);",
" \t",
" try ",
" try ",
" {",
" \t// perform the permission checking",
" for (Iterator iter = requiredPermissionsList.iterator(); ",
" iter.hasNext();) ",
" {",
" ((StatementPermission) iter.next()).check(lcc, ",
" authorizationId, false);",
" }",
" } ",
" finally ",
" {",
" dd.doneReading(ddMode, lcc);",
" }",
" } ",
" finally ",
" {",
" \t// make sure we commit; otherwise, we will end up with ",
" \t// mismatch nested level in the language connection context.",
" lcc.commitNestedTransaction();",
" }",
" }",
" }"
],
"header": "@@ -150,17 +150,61 @@ implements Authorizer",
"removed": [
" if( requiredPermissionsList != null && ! requiredPermissionsList.isEmpty() && ",
" for( Iterator iter = requiredPermissionsList.iterator();",
" iter.hasNext();)",
" ((StatementPermission) iter.next()).check( lcc, authorizationId, false);",
" } ",
"\t\t}",
"\t}"
]
}
]
}
] |
derby-DERBY-1718-a4656283
|
DERBY-1718 (creating an after insert trigger with a trigger action involving the
xml datatype throws java.io.NotSerializableException)
Patch contributed by Yip Ng.
The fix basically implements the Formatable interface for SqlXmlUtil class.
Currently, it writes out the query expression string instead of the XPath
object (it's serializable, I think), and then later recompiles the query once
at evaluation time. The reason behind this is that I don't want the stored
form to be tied to a particular XML implementation.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@448085 13f79535-47bb-0310-9956-ffa450edef68
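The recompile-on-read shape of that fix can be sketched with plain `java.io.Serializable` standing in for Derby's Formatable. Everything below (the class and method names, the stand-in compile step) is illustrative rather than Derby code: persist only the portable query text, mark the implementation-specific compiled form transient, and recompile it lazily after deserialization.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch: the stored form carries only the query string, so it is not
// tied to a particular XML (XPath) implementation.
public class CompiledQuery implements Serializable {
    private final String queryExpr;    // portable stored form
    private transient Object compiled; // implementation-specific, not serialized

    public CompiledQuery(String queryExpr) {
        this.queryExpr = queryExpr;
        this.compiled = compile(queryExpr);
    }

    // Stand-in for the real XPath compilation step.
    private static Object compile(String expr) {
        return "COMPILED(" + expr + ")";
    }

    // Recompiles once if this instance was reconstructed from its
    // serialized form (compiled is null after deserialization).
    public Object getCompiled() {
        if (compiled == null)
            compiled = compile(queryExpr);
        return compiled;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new CompiledQuery("//name"));
        oos.close();
        CompiledQuery restored = (CompiledQuery) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(restored.getCompiled()); // COMPILED(//name)
    }
}
```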
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/io/StoredFormatIds.java",
"hunks": [
{
"added": [
" /**",
" \tclass org.apache.derby.iapi.types.SqlXmlUtil",
" */",
" static public final int SQL_XML_UTIL_V01_ID =",
" (MIN_ID_2 + 464);",
" "
],
"header": "@@ -490,6 +490,12 @@ public interface StoredFormatIds {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SqlXmlUtil.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.io.Formatable;",
"import org.apache.derby.iapi.services.io.StoredFormatIds;",
"import java.io.IOException;",
"import java.io.ObjectOutput;",
"import java.io.ObjectInput;"
],
"header": "@@ -23,11 +23,16 @@ package org.apache.derby.iapi.types;",
"removed": []
},
{
"added": [
"public class SqlXmlUtil implements Formatable"
],
"header": "@@ -109,7 +114,7 @@ import org.apache.xalan.templates.OutputProperties;",
"removed": [
"public class SqlXmlUtil "
]
},
{
"added": [
" // Used to recompile the XPath expression when this formatable",
" // object is reconstructed. e.g.: SPS ",
" private String queryExpr;",
" private String opName;",
" private boolean recompileQuery;",
" "
],
"header": "@@ -124,6 +129,12 @@ public class SqlXmlUtil",
"removed": []
},
{
"added": [
" ",
" this.queryExpr = queryExpr;",
" this.opName = opName;",
" this.recompileQuery = false;"
],
"header": "@@ -256,6 +267,10 @@ public class SqlXmlUtil",
"removed": []
},
{
"added": [
" // if this object is in an SPS, we need to recompile the query",
" if (recompileQuery)",
" {",
" \tcompileXQExpr(queryExpr, opName);",
" }",
""
],
"header": "@@ -510,6 +525,12 @@ public class SqlXmlUtil",
"removed": []
}
]
}
] |
derby-DERBY-1729-fec62a7b
|
DERBY-1729, contributed by Yip Ng
committing derby1729-trunk-diff03.txt patch.
The GrantNode and RevokeNode should have derived from DDLStatementNode instead
of MiscellaneousStatementNode. Subclassing DDLStatementNode will generate a
call to GenericResultSetFactory's getDDLResultSet() in the activation class
which invokes the GenericAuthorizer's authorize() method with the proper
parameters to enforce the correct semantics.
public ResultSet getDDLResultSet (Activation activation) throws StandardException
{
getAuthorizer(activation).authorize(activation, Authorizer.SQL_DDL_OP);
return getMiscResultSet( activation);
}
Also adding an additional sql file for derbylang. The reason I didn't include
this in grantRevokeDDL.sql is because of a name collision, and this testcase is
one of the many additional grant/revoke tests that I wrote; I'd like to
append the rest of those testcases to this file (grantRevokeDDL2.sql) when I
submit my patch for DERBY-1736.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@441140 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1732-665a8b9a
|
DERBY-1732 1. Make a change to GenericStatementContext.isLastHandler() so it will return false for JVM errors, thus
allowing the outer contexts to take corrective action.
2. The store transaction context treats JVM errors as session severity. To ensure consistency,
map the severity for non-StandardException instances to SESSION_SEVERITY in GenericLanguageContext,
and GenericStatementContext.
Patch contributed by Sunitha Kambhampati [email protected]
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@453395 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/conn/GenericStatementContext.java",
"hunks": [
{
"added": [
"\t\t** session severity. It is probably an unexpected",
" ** Store layer treats JVM error as session severity, ",
" ** hence to be consistent and to avoid getting rawstore",
" ** protocol violation errors, we treat java errors here",
" ** to be of session severity. ",
" */",
"\t\t\tExceptionSeverity.SESSION_SEVERITY;"
],
"header": "@@ -498,12 +498,16 @@ final class GenericStatementContext",
"removed": [
"\t\t** xact severity. It is probably an unexpected",
"\t\t*/",
"\t\t\tExceptionSeverity.STATEMENT_SEVERITY;"
]
},
{
"added": [
" // For JVM errors, severity gets mapped to ",
" // ExceptionSeverity.NO_APPLICABLE_SEVERITY",
" // in ContextManager.cleanupOnError. It is necessary to ",
" // let outer contexts take corrective action for jvm errors, so ",
" // return false as this will not be the last handler for such ",
" // errors.",
"\t\treturn inUse && !rollbackParentContext && ",
" ( severity == ExceptionSeverity.STATEMENT_SEVERITY );"
],
"header": "@@ -592,8 +596,14 @@ final class GenericStatementContext",
"removed": [
"\t\treturn inUse && !rollbackParentContext && ((severity == ExceptionSeverity.STATEMENT_SEVERITY) ||",
"\t\t\t\t\t\t(severity == ExceptionSeverity.NO_APPLICABLE_SEVERITY));"
]
}
]
}
] |
derby-DERBY-1742-4b3350cb
|
DERBY-1734 (partial) Change SYSALIASESRowFactory to use the utility methods to obtain SystemColumn implementations
to avoid passing redundant parameters leading to bugs (see DERBY-1742). Fix the bug described by DERBY-1742
so that the column descriptor for SYSTEMALIAS BOOLEAN column is created correctly. Remove the calls to
convert the case for the SYSTEMALIASES columns as the system tables are an implementation detail of
Derby which is fixed at upper case.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@433434 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/SYSALIASESRowFactory.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.JDBC30Translation;"
],
"header": "@@ -22,6 +22,7 @@",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/SystemColumnImpl.java",
"hunks": [
{
"added": [
" ",
" /**",
" * Create a system column for a builtin type.",
" * ",
" * @param name",
" * name of column",
" * @param jdbcTypeId",
" * JDBC type id from java.sql.Types",
" * @param nullability",
" * Whether or not column accepts nulls.",
" */",
" static SystemColumn getColumn(String name, int jdbcTypeId,",
" boolean nullability,int maxLength) {",
" return new SystemColumnImpl(name, DataTypeDescriptor",
" .getBuiltInDataTypeDescriptor(jdbcTypeId, nullability, maxLength));",
" }",
" "
],
"header": "@@ -62,7 +62,23 @@ class SystemColumnImpl implements SystemColumn",
"removed": [
""
]
}
]
}
] |
derby-DERBY-1746-b4fdbf81
|
DERBY-1746 - removing the svn:externals property from <trunk>/tools/testing; adjusting upgrade tests to attempt, by default, to access https://svn.apache.org/repos/asf/db/derby/jars for the older version's derby.jar files; removing the dependency on 'lib' in the old version's directory structure. Also adjusted comments.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@538724 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1748-52121014
|
DERBY-1748: Global case insensitive setting
Patch contributed by Gunnar Grim.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@929111 13f79535-47bb-0310-9956-ffa450edef68
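The case-insensitive matching this change enables via `collation=TERRITORY_BASED:SECONDARY` rests on `java.text.Collator` strength levels, which the JDK demonstrates on its own (this is a minimal demonstration, not Derby code): at SECONDARY strength a collator treats strings differing only in case as equal, while at TERTIARY strength case is significant.

```java
import java.text.Collator;
import java.util.Locale;

// Case is a tertiary-level difference, so a SECONDARY-strength collator
// compares "Hello" and "hello" as equal.
public class CollationStrengthDemo {
    public static void main(String[] args) {
        Collator c = Collator.getInstance(Locale.US);

        c.setStrength(Collator.SECONDARY);
        System.out.println(c.compare("Hello", "hello") == 0); // true

        c.setStrength(Collator.TERTIARY);
        System.out.println(c.compare("Hello", "hello") == 0); // false
    }
}
```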
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/DataTypeDescriptor.java",
"hunks": [
{
"added": [
" * Obtain the collation type from a collation property value.",
"\t * @return The collation type, or -1 if not recognized.",
" */",
"\tpublic static int getCollationType(String collationName)",
"\t{",
"\t\tif (collationName.equalsIgnoreCase(Property.UCS_BASIC_COLLATION))",
"\t\t\treturn StringDataValue.COLLATION_TYPE_UCS_BASIC;",
"\t\telse if (collationName.equalsIgnoreCase(Property.TERRITORY_BASED_COLLATION))",
"\t\t\treturn StringDataValue.COLLATION_TYPE_TERRITORY_BASED;",
"\t\telse if (collationName.equalsIgnoreCase(Property.TERRITORY_BASED_PRIMARY_COLLATION))",
"\t\t\treturn StringDataValue.COLLATION_TYPE_TERRITORY_BASED_PRIMARY;",
"\t\telse if (collationName.equalsIgnoreCase(Property.TERRITORY_BASED_SECONDARY_COLLATION))",
"\t\t\treturn StringDataValue.COLLATION_TYPE_TERRITORY_BASED_SECONDARY;",
"\t\telse if (collationName.equalsIgnoreCase(Property.TERRITORY_BASED_TERTIARY_COLLATION))",
"\t\t\treturn StringDataValue.COLLATION_TYPE_TERRITORY_BASED_TERTIARY;",
"\t\telse if (collationName.equalsIgnoreCase(Property.TERRITORY_BASED_IDENTICAL_COLLATION))",
"\t\t\treturn StringDataValue.COLLATION_TYPE_TERRITORY_BASED_IDENTICAL;",
"\t\telse",
"\t\t\treturn -1;",
"\t}",
""
],
"header": "@@ -1090,13 +1090,27 @@ public final class DataTypeDescriptor implements Formatable",
"removed": [
"\t * Gets the name of this datatype.",
" * <p>",
" * Used to generate strings decribing collation type for error messages.",
"\t * ",
"\t *",
"\t * @return\tthe name of the collation being used in this type.",
"\t */"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/DataValueFactoryImpl.java",
"hunks": [
{
"added": [
"\t \t\t\tproperties.getProperty(Attribute.COLLATION);",
"\t\t\t\t\tint collationType = DataTypeDescriptor.getCollationType(userDefinedCollation);",
"\t\t\t\t\tif (collationType != StringDataValue.COLLATION_TYPE_UCS_BASIC) {",
"\t\t\t\t\t\tif (collationType >= StringDataValue.COLLATION_TYPE_TERRITORY_BASED",
"\t\t\t\t\t\t\t\t&& collationType < StringDataValue.COLLATION_TYPE_TERRITORY_BASED_IDENTICAL) {",
"\t\t\t\t\t\t\tint strength = collationType - StringDataValue.COLLATION_TYPE_TERRITORY_BASED_PRIMARY;",
"\t\t\t\t\t\t\tcollatorForCharacterTypes = verifyCollatorSupport(strength);",
"\t\t\t\t\t\t} else",
"\t\t\t\t\t\t\tthrow StandardException.newException(SQLState.INVALID_COLLATION, userDefinedCollation);",
"\t\t\t\t\t}"
],
"header": "@@ -150,13 +150,17 @@ abstract class DataValueFactoryImpl implements DataValueFactory, ModuleControl",
"removed": [
"\t \t\t\tproperties.getProperty(Attribute.COLLATION);\t\t",
"\t \t\t\tif (!userDefinedCollation.equalsIgnoreCase(Property.UCS_BASIC_COLLATION)",
"\t \t\t\t\t\t&& !userDefinedCollation.equalsIgnoreCase(Property.TERRITORY_BASED_COLLATION))",
"\t \t\t\t\tthrow StandardException.newException(SQLState.INVALID_COLLATION, userDefinedCollation);",
"\t \t\t\tif (userDefinedCollation.equalsIgnoreCase(Property.TERRITORY_BASED_COLLATION))",
"\t \t\t\t\tcollatorForCharacterTypes = verifyCollatorSupport();"
]
},
{
"added": [
"\t\t\t//\tCalculate the collator strength. COLLATION_TYPE_TERRITORY_BASED use strength -1, i e unspecified.",
"\t\t\tint strength = collationType - StringDataValue.COLLATION_TYPE_TERRITORY_BASED_PRIMARY;",
" \t\tcollatorForCharacterTypes = verifyCollatorSupport(strength);"
],
"header": "@@ -1047,7 +1051,9 @@ abstract class DataValueFactoryImpl implements DataValueFactory, ModuleControl",
"removed": [
" \t\tcollatorForCharacterTypes = verifyCollatorSupport();"
]
},
{
"added": [
" *",
"\t * @param strength Collator strength or -1 for locale default.",
" private RuleBasedCollator verifyCollatorSupport(int strength)"
],
"header": "@@ -1055,11 +1061,12 @@ abstract class DataValueFactoryImpl implements DataValueFactory, ModuleControl",
"removed": [
" * ",
" private RuleBasedCollator verifyCollatorSupport() "
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/Decorator.java",
"hunks": [
{
"added": [
" /**",
" * Decorate a set of tests to use an single",
" * use database with TERRITORY_BASED:SECONDARY collation",
" * set to the passed in locale. Database is created",
" * by the setUp method of the decorator.",
" * @param locale Locale used to set territory JDBC attribute. If null",
" * then only collation=TERRITORY_BASED:SECONDARY will be set.",
" */",
" public static Test territoryCollatedCaseInsensitiveDatabase(Test test, final String locale)",
" {",
"",
" String attributes = \"collation=TERRITORY_BASED:SECONDARY\";",
"",
" if (locale != null)",
" attributes = attributes.concat(\";territory=\" + locale);",
"",
" return attributesDatabase(attributes, test);",
" }",
""
],
"header": "@@ -156,6 +156,25 @@ public class Decorator {",
"removed": []
}
]
}
] |
derby-DERBY-1748-abe46d01
|
DERBY-1748 (partial) Removed unused collation code
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@922682 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/services/i18n/LocaleFinder.java",
"hunks": [
{
"added": [],
"header": "@@ -25,7 +25,6 @@ import org.apache.derby.iapi.error.StandardException;",
"removed": [
"import java.text.RuleBasedCollator;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/db/BasicDatabase.java",
"hunks": [
{
"added": [],
"header": "@@ -88,8 +88,6 @@ import java.util.Enumeration;",
"removed": [
"import java.text.Collator;",
"import java.text.RuleBasedCollator;"
]
},
{
"added": [],
"header": "@@ -127,7 +125,6 @@ public class BasicDatabase implements ModuleControl, ModuleSupportable, Property",
"removed": [
"\tprivate RuleBasedCollator ruleBasedCollator;"
]
},
{
"added": [],
"header": "@@ -495,23 +492,6 @@ public class BasicDatabase implements ModuleControl, ModuleSupportable, Property",
"removed": [
"\t/** @exception StandardException\tThrown on error */",
"\tpublic RuleBasedCollator getCollator() throws StandardException {",
"\t\tRuleBasedCollator retval = ruleBasedCollator;",
"",
"\t\tif (retval == null) {",
"\t\t\tif (databaseLocale != null) {",
"\t\t\t\tretval = ruleBasedCollator =",
"\t\t\t\t\t(RuleBasedCollator) Collator.getInstance(databaseLocale);",
"\t\t\t} else {",
"\t\t\t\tthrow noLocale();",
"\t\t\t}",
"\t\t}",
"",
"\t\treturn retval;",
"\t}",
"",
""
]
}
]
}
] |
derby-DERBY-1751-953604b4
|
DERBY-1751: derbynet/testSecMec.java fails with ShutdownException in
DerbyNetClient framework
The attached patch avoids the problem seen in this issue by setting
the console output of the network server to a file. This change is
made to the following files:
* functionTests/tests/jdbc4/TestConnectionMethods.java
* functionTests/tests/derbynet/testSecMec.java
* functionTests/tests/derbynet/dataSourcePermissions_net.java
* junit/NetworkServerTestSetup.java
It also changes DRDAProtocolException so that agent Errors will be
printed to the network server console instead of System.out.
Contributed by Fernanda Pizzorno.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@449671 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/NetworkServerTestSetup.java",
"hunks": [
{
"added": [
"import java.io.FileNotFoundException;",
"import java.io.FileOutputStream;",
"import java.security.AccessController;",
"import java.security.PrivilegedAction;"
],
"header": "@@ -19,8 +19,12 @@",
"removed": []
},
{
"added": [
" private FileOutputStream serverOutput;",
" "
],
"header": "@@ -38,6 +42,8 @@ import org.apache.derby.drda.NetworkServerControl;",
"removed": []
},
{
"added": [
"",
" ",
" serverOutput = (FileOutputStream)",
" AccessController.doPrivileged(new PrivilegedAction() {",
" public Object run() {",
" String fileName = System.getProperty(\"derby.system.home\") + ",
" \"serverConsoleOutput.log\";",
" FileOutputStream fos = null;",
" try {",
" fos = (new FileOutputStream(fileName));",
" } catch (FileNotFoundException ex) {",
" ex.printStackTrace();",
" }",
" return fos;",
" }",
" });",
"",
" networkServerController.start(new PrintWriter(serverOutput));"
],
"header": "@@ -54,10 +60,27 @@ final public class NetworkServerTestSetup extends TestSetup {",
"removed": [
" networkServerController.start(null);"
]
},
{
"added": [
" serverOutput.close();"
],
"header": "@@ -81,6 +104,7 @@ final public class NetworkServerTestSetup extends TestSetup {",
"removed": []
}
]
}
] |
derby-DERBY-1755-6c1fe080
|
DERBY-1756
patch Derby1756.2.diff.txt contributed by Sunitha Kambhampati
With the DERBY-962 changes, if the client JVM supports EUSRIDPWD then the client would
use EUSRIDPWD as the security mechanism. But it is possible that the server JVM
might not support EUSRIDPWD, and the connection can fail.
When DERBY-1517 and DERBY-1755 are fixed, there might be a way to use EUSRIDPWD
when both the client and server VMs have support for it.
This patch does the following:
o Do not use EUSRIDPWD as the default security mechanism even if the client VM can support it.
o Fix comments in testSecMec.java.
o Existing tests in testSecMec.java cover this codepath and the master file
output reflects the changes made. Note: only the ibm14 client master file has
changed, since only ibm141 and later JVMs come with a JCE that can support
EUSRIDPWD.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@439775 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/jdbc/ClientBaseDataSource.java",
"hunks": [
{
"added": [
" * 2. if password is available,then USRIDPWD is returned."
],
"header": "@@ -277,10 +277,7 @@ public abstract class ClientBaseDataSource implements Serializable, Referenceabl",
"removed": [
" * 2. if password is available, if client supports EUSRIDPWD, then EUSRIDPWD is ",
" * returned",
" * 3. if password is available, if client does not support EUSRIDPWD, then",
" * USRIDPWD is returned."
]
},
{
"added": [
" /*",
" // -----------------------",
" // PLEASE NOTE: ",
" // When DERBY-1517, DERBY-1755 is fixed, there might be a way to use EUSRIDPWD ",
" // when both client and server vm's have support for it. Hence the below",
" // if statement is commented out."
],
"header": "@@ -291,14 +288,16 @@ public abstract class ClientBaseDataSource implements Serializable, Referenceabl",
"removed": [
" // if password is available, then a security mechanism is picked in",
" // following order if support is available.",
" // 1. EUSRIDPWD",
" // 2. USRIDPWD"
]
}
]
}
] |
derby-DERBY-1756-6c1fe080
|
DERBY-1756
patch Derby1756.2.diff.txt contributed by Sunitha Kambhampati
With the DERBY-962 changes, if the client JVM supports EUSRIDPWD, the client would
use EUSRIDPWD as the security mechanism. However, the server JVM
might not support EUSRIDPWD, in which case the connection can fail.
When DERBY-1517 and DERBY-1755 are fixed, there might be a way to use EUSRIDPWD
when both client and server VMs have support for it.
This patch does the following:
o Do not use EUSRIDPWD as the default security mechanism even if the client vm can support it.
o Fix comments in testSecMec.java.
o Existing tests in testSecMec.java cover this code path and the master file
output reflects the changes made. Note: only the ibm14 client master file has
changed, since only ibm141 and later JVMs ship with a JCE that can support
EUSRIDPWD.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@439775 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/jdbc/ClientBaseDataSource.java",
"hunks": [
{
"added": [
" * 2. if password is available,then USRIDPWD is returned."
],
"header": "@@ -277,10 +277,7 @@ public abstract class ClientBaseDataSource implements Serializable, Referenceabl",
"removed": [
" * 2. if password is available, if client supports EUSRIDPWD, then EUSRIDPWD is ",
" * returned",
" * 3. if password is available, if client does not support EUSRIDPWD, then",
" * USRIDPWD is returned."
]
},
{
"added": [
" /*",
" // -----------------------",
" // PLEASE NOTE: ",
" // When DERBY-1517, DERBY-1755 is fixed, there might be a way to use EUSRIDPWD ",
" // when both client and server vm's have support for it. Hence the below",
" // if statement is commented out."
],
"header": "@@ -291,14 +288,16 @@ public abstract class ClientBaseDataSource implements Serializable, Referenceabl",
"removed": [
" // if password is available, then a security mechanism is picked in",
" // following order if support is available.",
" // 1. EUSRIDPWD",
" // 2. USRIDPWD"
]
}
]
}
] |
derby-DERBY-1757-86cae7bf
|
DERBY-1817: Race condition in network server's thread pool
Instead of always putting new sessions in the run queue when there are
free threads, the network server now compares the number of free
threads and the size of the run queue. This is done to prevent the run
queue from growing to a size greater than the number of free
threads. Also, the server now synchronizes on runQueue until the
session has been added to the queue. This is to prevent two threads
from deciding that there are enough free threads and adding the
session to the run queue when in fact there were only enough free
threads for one of them. With this patch, I am not able to reproduce
DERBY-1757 on platforms where the failure was easily reproduced
before.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@441802 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/ClientThread.java",
"hunks": [
{
"added": [],
"header": "@@ -47,7 +47,6 @@ final class ClientThread extends Thread {",
"removed": [
"\t\t\tSession clientSession = null;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"\t * Add a session - for use by <code>ClientThread</code>. Put the session",
"\t * into the session table and the run queue. Start a new",
"\t * <code>DRDAConnThread</code> if there are more sessions waiting than",
"\t * there are free threads, and the maximum number of threads is not",
"\t * exceeded.",
"\t *",
"\t * @param connectionNumber number of connection",
"\t * @param clientSocket the socket to read from and write to",
"\t */",
"\tvoid addSession(int connectionNumber, Socket clientSocket)",
"\t\t\tthrows IOException {",
"",
"\t\t// Note that we always re-fetch the tracing configuration because it",
"\t\t// may have changed (there are administrative commands which allow",
"\t\t// dynamic tracing reconfiguration).",
"\t\tSession session = new Session(connectionNumber, clientSocket,",
"\t\t\t\t\t\t\t\t\t getTraceDirectory(), getTraceAll());",
"",
"\t\tsessionTable.put(new Integer(connectionNumber), session);",
"",
"\t\t// Synchronize on runQueue to prevent other threads from updating",
"\t\t// runQueue or freeThreads. Hold the monitor's lock until a thread is",
"\t\t// started or the session is added to the queue. If we release the lock",
"\t\t// earlier, we might start too few threads (DERBY-1817).",
"\t\tsynchronized (runQueue) {",
"\t\t\tDRDAConnThread thread = null;",
"",
"\t\t\t// try to start a new thread if we don't have enough free threads",
"\t\t\t// to service all sessions in the run queue",
"\t\t\tif (freeThreads <= runQueue.size()) {",
"\t\t\t\t// Synchronize on threadsSync to ensure that the value of",
"\t\t\t\t// maxThreads doesn't change until the new thread is added to",
"\t\t\t\t// threadList.",
"\t\t\t\tsynchronized (threadsSync) {",
"\t\t\t\t\t// only start a new thread if we have no maximum number of",
"\t\t\t\t\t// threads or the maximum number of threads is not exceeded",
"\t\t\t\t\tif ((maxThreads == 0) ||",
"\t\t\t\t\t\t\t(threadList.size() < maxThreads)) {",
"\t\t\t\t\t\tthread = new DRDAConnThread(session, this,",
"\t\t\t\t\t\t\t\t\t\t\t\t\tgetTimeSlice(),",
"\t\t\t\t\t\t\t\t\t\t\t\t\tgetLogConnections());",
"\t\t\t\t\t\tthreadList.add(thread);",
"\t\t\t\t\t\tthread.start();",
"\t\t\t\t\t}",
"\t\t\t\t}",
"\t\t\t}",
"",
"\t\t\t// add the session to the run queue if we didn't start a new thread",
"\t\t\tif (thread == null) {",
"\t\t\t\trunQueueAdd(session);",
"\t\t\t}",
"\t\t}"
],
"header": "@@ -3372,14 +3372,58 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t * Add To Session Table - for use by ClientThread, add a new Session to the sessionTable.",
"\t *",
"\t * @param i\tConnection number to register",
"\t * @param s\tSession to add to the sessionTable",
"\t */",
"\tprotected void addToSessionTable(Integer i, Session s)",
"\t{",
"\t\tsessionTable.put(i, s);"
]
}
]
}
] |
derby-DERBY-1758-3098ab04
|
DERBY-1758: Enable xmlSuite to run as part of derbyall for qualified JVMs
This patch was contributed by A B ([email protected])
This patch adds two JUnit tests to lang/_Suite.java. The first test,
XMLTypeAndOpsTest.java, is meant to be a JUnit equivalent to the current
lang/xml_general.sql test. The second test, XMLMissingClassesTest,
tests the behavior of the SQL/XML operators when the required JAXP
or Xalan classes are not in the classpath.
The XML classes can be provided in any of a number of ways:
1) bundled into the JVM
2) installed as endorsed libraries
3) specified in the classpath
Hand-testing was performed to ensure that the new JUnit tests perform
correctly in these various configurations.
If the tests are run in an environment which does not support the XML
features, the tests quietly do nothing.
The patch, d1758_newJUnitTests_v2.patch, also adds a new utility method
and some associated state to JDBC.java for checking two things:
1) that the classpath has JAXP and Xalan classes, and
2) if the classpath has Xalan, check that the version of Xalan meets
the minimum requirement for use of Derby SQL/XML operators.
These methods/flags are then used to determine when to run the new
XML JUnit tests.
NOTE: After this patch has been reviewed/updated and finally committed
I will post a separate patch to remove the old lang/xml_general.sql test
and the corresponding master files. I will then continue addressing the
rest of the tasks for this issue (esp. xmlBinding.java) in incremental fashion.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@468503 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/JDBC.java",
"hunks": [
{
"added": [
"import java.io.PrintWriter;",
"import java.io.ByteArrayInputStream;",
"import java.io.ByteArrayOutputStream;",
"",
"import java.lang.reflect.Method;",
"",
"import java.util.StringTokenizer;",
"import java.util.Properties;",
""
],
"header": "@@ -25,6 +25,15 @@ import java.util.Iterator;",
"removed": []
},
{
"added": [
" /**",
" * Minimum version of Xalan required to run XML tests under",
" * Security Manager. In this case, we're saying that the",
" * minimum version is Xalan 2.5.0 (because there's a bug",
" * in earlier versions that causes problems with security",
" * manager).",
" */",
" private static int [] MIN_XALAN_VERSION = new int [] { 2, 5, 0 };",
""
],
"header": "@@ -33,6 +42,15 @@ import junit.framework.Assert;",
"removed": []
},
{
"added": [
" /**",
" * Determine whether or not the classpath with which we're",
" * running has the JAXP API classes required for use of",
" * the Derby XML operators.",
" */",
" private static final boolean HAVE_JAXP",
" = haveClass(\"org.w3c.dom.Document\");",
"",
"",
" /**",
" * Determine whether or not the classpath with which we're",
" * running has a version of Xalan in it. Xalan is required",
" * for use of the Derby XML operators. In particular we",
" * check for:",
" *",
" * 1. Xalan classes (version doesn't matter here)",
" * 2. The Xalan \"EnvironmentCheck\" class, which is included",
" * as part of Xalan. This allows us to check the specific",
" * version of Xalan in use so that we can determine if",
" * if we satisfy the minimum requirement.",
" */",
" private static final boolean HAVE_XALAN =",
" haveClass(\"org.apache.xpath.XPath\") &&",
" haveClass(\"org.apache.xalan.xslt.EnvironmentCheck\");",
"",
" /**",
" * Determine if we have the minimum required version of Xalan",
" * for successful use of the XML operators.",
" */",
" private static final boolean HAVE_MIN_XALAN",
" = HAVE_XALAN && checkXalanVersion();",
""
],
"header": "@@ -53,6 +71,38 @@ public class JDBC {",
"removed": []
},
{
"added": [
""
],
"header": "@@ -100,6 +150,7 @@ public class JDBC {",
"removed": []
},
{
"added": [
"\t/**",
" \t * <p>",
"\t * Return true if the classpath contains JAXP and",
"\t * Xalan classes (this method doesn't care about",
"\t * the particular version of Xalan).",
"\t * </p>",
"\t */",
"\tpublic static boolean classpathHasXalanAndJAXP()",
"\t{",
"\t\treturn HAVE_JAXP && HAVE_XALAN;",
"\t}",
"",
"\t/**",
"\t * <p>",
"\t * Return true if the classpath meets all of the requirements",
"\t * for use of the SQL/XML operators. This means that all",
"\t * required classes exist in the classpath AND the version",
"\t * of Xalan that we found is at least MIN_XALAN_VERSION.",
"\t * </p>",
"\t */",
"\tpublic static boolean classpathMeetsXMLReqs()",
"\t{",
"\t\treturn HAVE_JAXP && HAVE_MIN_XALAN;",
"\t}",
""
],
"header": "@@ -112,6 +163,31 @@ public class JDBC {",
"removed": []
},
{
"added": [
"",
" /**",
" * Determine whether or not the classpath with which we're",
" * running has a version of Xalan that meets the minimum",
" * Xalan version requirement. We do that by using a Java",
" * utility that ships with Xalan--namely, \"EnvironmentCheck\"--",
" * and by parsing the info gathered by that method to find",
" * the Xalan version. We use reflection when doing this",
" * so that this file will compile/execute even if XML classes",
" * are missing.",
" *",
" * Assumption is that we only get to this method if we already",
" * know that there *is* a version of Xalan in the classpath",
" * and that version includes the \"EnvironmentCheck\" class.",
" *",
" * Note that this method returns false if the call to Xalan's",
" * EnvironmentCheck.checkEnvironment() returns false for any",
" * reason. As a specific example, that method will always",
" * return false when running with ibm131 because it cannot",
" * find the required methods on the SAX 2 classes (apparently",
" * the classes in ibm131 jdk don't have all of the methods",
" * required by Xalan). Thus this method will always return",
" * \"false\" for ibm131.",
" */",
" private static boolean checkXalanVersion()",
" {",
" boolean haveMinXalanVersion = false;",
" try {",
"",
" // These io objects allow us to retrieve information generated",
" // by the call to EnvironmenCheck.checkEnvironment()",
" ByteArrayOutputStream bos = new ByteArrayOutputStream();",
" PrintWriter pW = new PrintWriter(bos);",
"",
" // Call the method using reflection.",
"",
" Class cl = Class.forName(\"org.apache.xalan.xslt.EnvironmentCheck\");",
" Method meth = cl.getMethod(\"checkEnvironment\",",
" new Class[] { PrintWriter.class });",
"",
" Boolean boolObj = (Boolean)meth.invoke(",
" cl.newInstance(), new Object [] { pW });",
"",
" pW.flush();",
" bos.flush();",
"",
" cl = null;",
" meth = null;",
" pW = null;",
"",
" /* At this point 'bos' holds a list of properties with",
" * a bunch of environment information. The specific",
" * property we're looking for is \"version.xalan2_2\",",
" * so get that property, parse the value, and see",
" * if the version is at least the minimum required.",
" */",
" if (boolObj.booleanValue())",
" {",
" // Load the properties gathered from checkEnvironment().",
" Properties props = new Properties();",
" props.load(new ByteArrayInputStream(bos.toByteArray()));",
" bos.close();",
"",
" // Now pull out the one we need.",
" String ver = props.getProperty(\"version.xalan2_2\");",
" haveMinXalanVersion = (ver != null);",
" if (haveMinXalanVersion)",
" {",
" /* We found the property, so parse out the necessary",
" * piece. The value is of the form:",
" *",
" * <productName> Major.minor.x",
" *",
" * Ex:",
" *",
" * version.xalan2_2=Xalan Java 2.5.1 ",
" * version.xalan2_2=XSLT4J Java 2.6.6",
" */",
" int i = 0;",
" StringTokenizer tok = new StringTokenizer(ver, \". \");",
" while (tok.hasMoreTokens())",
" {",
" String str = tok.nextToken().trim();",
" if (Character.isDigit(str.charAt(0)))",
" {",
" int val = Integer.valueOf(str).intValue();",
" if (val < MIN_XALAN_VERSION[i])",
" {",
" haveMinXalanVersion = false;",
" break;",
" }",
" i++;",
" }",
"",
" /* If we've checked all parts of the min version,",
" * then we assume we're okay. Ex. \"2.5.0.2\"",
" * is considered greater than \"2.5.0\".",
" */",
" if (i >= MIN_XALAN_VERSION.length)",
" break;",
" }",
"",
" /* If the value had fewer parts than the",
" * mininum version, then it doesn't meet",
" * the requirement. Ex. \"2.5\" is considered",
" * to be a lower version than \"2.5.0\".",
" */",
" if (i < MIN_XALAN_VERSION.length)",
" haveMinXalanVersion = false;",
" }",
" }",
"",
" /* Else the call to checkEnvironment() returned \"false\",",
" * which means it couldn't find all of the classes/methods",
" * required for Xalan to function. So in that case we'll",
" * fall through and just return false, as well.",
" */",
"",
" } catch (Throwable t) {",
"",
" System.out.println(\"Unexpected exception while \" +",
" \"trying to find Xalan version:\");",
" t.printStackTrace(System.err);",
"",
" // If something went wrong, assume we don't have the",
" // necessary classes.",
" haveMinXalanVersion = false;",
"",
" }",
"",
" return haveMinXalanVersion;",
" }",
""
],
"header": "@@ -640,4 +716,137 @@ public class JDBC {",
"removed": []
}
]
}
] |
derby-DERBY-1758-378aa34e
|
DERBY-2131: Use a privileged block when calling out to the JAXP parser
so that users running with a security manager can insert XML values
that reference external DTDs without encountering security exceptions.
This patch does not include any tests; however, relevant test cases
will be enabled as part of DERBY-1758.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@481117 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/SqlXmlUtil.java",
"hunks": [
{
"added": [
" protected String serializeToString(final String xmlAsText)",
" final ArrayList aList = new ArrayList();",
"",
" /* The call to dBuilder.parse() is a call to an external",
" * (w.r.t. to Derby) JAXP parser. If the received XML",
" * text references an external DTD, then the JAXP parser",
" * will try to read that external DTD. Thus we wrap the",
" * call to parse inside a privileged action to make sure",
" * that the JAXP parser has the required permissions for",
" * reading the DTD file.",
" */",
" java.security.AccessController.doPrivileged(",
" new java.security.PrivilegedExceptionAction()",
" {",
" public Object run() throws Exception",
" {",
" aList.add(dBuilder.parse(",
" new InputSource(new StringReader(xmlAsText))));",
" return null;",
" }",
" });"
],
"header": "@@ -307,12 +307,29 @@ public class SqlXmlUtil implements Formatable",
"removed": [
" protected String serializeToString(String xmlAsText)",
" ArrayList aList = new ArrayList();",
" aList.add(dBuilder.parse(",
" new InputSource(new StringReader(xmlAsText))));"
]
}
]
}
] |
derby-DERBY-1758-b73c2a37
|
DERBY-1758: Enable xmlSuite to run as part of derbyall for qualified JVMs
This patch was contributed by A B ([email protected])
I'm attaching another patch, d1758_followup_v1.patch, that moves the XML
utility methods out of junit.JDBC and into a new class, junit.XML, per Dan's
suggestion (thanks Dan).
Note that I changed the "haveClass()" method in JDBC.java from private to
protected so that it can be called from the junit.XML class. That was the
easiest thing to do.
Since checking the classpath is not a JDBC-specific operation, the other
option is to move "haveClass()" to some other class in the junit package. If
anyone indicates a preference for doing so and also indicates the class to
which the method should be moved, I can do it this way. Otherwise I'll just
leave it as it is (i.e. keep it in JDBC.java and make it protected).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@468605 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/JDBC.java",
"hunks": [
{
"added": [],
"header": "@@ -25,15 +25,6 @@ import java.util.Iterator;",
"removed": [
"import java.io.PrintWriter;",
"import java.io.ByteArrayInputStream;",
"import java.io.ByteArrayOutputStream;",
"",
"import java.lang.reflect.Method;",
"",
"import java.util.StringTokenizer;",
"import java.util.Properties;",
""
]
},
{
"added": [],
"header": "@@ -42,15 +33,6 @@ import junit.framework.Assert;",
"removed": [
" /**",
" * Minimum version of Xalan required to run XML tests under",
" * Security Manager. In this case, we're saying that the",
" * minimum version is Xalan 2.5.0 (because there's a bug",
" * in earlier versions that causes problems with security",
" * manager).",
" */",
" private static int [] MIN_XALAN_VERSION = new int [] { 2, 5, 0 };",
""
]
},
{
"added": [
" protected static boolean haveClass(String className)"
],
"header": "@@ -70,45 +52,13 @@ public class JDBC {",
"removed": [
" ",
" /**",
" * Determine whether or not the classpath with which we're",
" * running has the JAXP API classes required for use of",
" * the Derby XML operators.",
" */",
" private static final boolean HAVE_JAXP",
" = haveClass(\"org.w3c.dom.Document\");",
"",
"",
" /**",
" * Determine whether or not the classpath with which we're",
" * running has a version of Xalan in it. Xalan is required",
" * for use of the Derby XML operators. In particular we",
" * check for:",
" *",
" * 1. Xalan classes (version doesn't matter here)",
" * 2. The Xalan \"EnvironmentCheck\" class, which is included",
" * as part of Xalan. This allows us to check the specific",
" * version of Xalan in use so that we can determine if",
" * if we satisfy the minimum requirement.",
" */",
" private static final boolean HAVE_XALAN =",
" haveClass(\"org.apache.xpath.XPath\") &&",
" haveClass(\"org.apache.xalan.xslt.EnvironmentCheck\");",
"",
" /**",
" * Determine if we have the minimum required version of Xalan",
" * for successful use of the XML operators.",
" */",
" private static final boolean HAVE_MIN_XALAN",
" = HAVE_XALAN && checkXalanVersion();",
" private static boolean haveClass(String className)"
]
},
{
"added": [],
"header": "@@ -163,31 +113,6 @@ public class JDBC {",
"removed": [
"\t/**",
" \t * <p>",
"\t * Return true if the classpath contains JAXP and",
"\t * Xalan classes (this method doesn't care about",
"\t * the particular version of Xalan).",
"\t * </p>",
"\t */",
"\tpublic static boolean classpathHasXalanAndJAXP()",
"\t{",
"\t\treturn HAVE_JAXP && HAVE_XALAN;",
"\t}",
"",
"\t/**",
"\t * <p>",
"\t * Return true if the classpath meets all of the requirements",
"\t * for use of the SQL/XML operators. This means that all",
"\t * required classes exist in the classpath AND the version",
"\t * of Xalan that we found is at least MIN_XALAN_VERSION.",
"\t * </p>",
"\t */",
"\tpublic static boolean classpathMeetsXMLReqs()",
"\t{",
"\t\treturn HAVE_JAXP && HAVE_MIN_XALAN;",
"\t}",
""
]
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/XML.java",
"hunks": [
{
"added": [
"/*",
" *",
" * Derby - Class org.apache.derbyTesting.functionTests.util.XML",
" *",
" * Licensed to the Apache Software Foundation (ASF) under one or more",
" * contributor license agreements. See the NOTICE file distributed with",
" * this work for additional information regarding copyright ownership.",
" * The ASF licenses this file to You under the Apache License, Version 2.0",
" * (the \"License\"); you may not use this file except in compliance with",
" * the License. You may obtain a copy of the License at",
" *",
" * http://www.apache.org/licenses/LICENSE-2.0",
" *",
" * Unless required by applicable law or agreed to in writing, ",
" * software distributed under the License is distributed on an ",
" * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, ",
" * either express or implied. See the License for the specific ",
" * language governing permissions and limitations under the License.",
" */",
"package org.apache.derbyTesting.junit;",
"",
"import java.io.PrintWriter;",
"import java.io.ByteArrayInputStream;",
"import java.io.ByteArrayOutputStream;",
"",
"import java.lang.reflect.Method;",
"",
"import java.util.StringTokenizer;",
"import java.util.Properties;",
"",
"import junit.framework.Assert;",
"",
"/**",
" * XML utility methods for the JUnit tests.",
" *",
" */",
"public class XML {",
" ",
" /**",
" * Minimum version of Xalan required to run XML tests under",
" * Security Manager. In this case, we're saying that the",
" * minimum version is Xalan 2.5.0 (because there's a bug",
" * in earlier versions that causes problems with security",
" * manager).",
" */",
" private static int [] MIN_XALAN_VERSION = new int [] { 2, 5, 0 };",
"",
" /**",
" * Determine whether or not the classpath with which we're",
" * running has the JAXP API classes required for use of",
" * the Derby XML operators.",
" */",
" private static final boolean HAVE_JAXP =",
" JDBC.haveClass(\"org.w3c.dom.Document\");",
"",
" /**",
" * Determine whether or not the classpath with which we're",
" * running has a version of Xalan in it. Xalan is required",
" * for use of the Derby XML operators. In particular we",
" * check for:",
" *",
" * 1. Xalan classes (version doesn't matter here)",
" * 2. The Xalan \"EnvironmentCheck\" class, which is included",
" * as part of Xalan. This allows us to check the specific",
" * version of Xalan in use so that we can determine if",
" * if we satisfy the minimum requirement.",
" */",
" private static final boolean HAVE_XALAN =",
" JDBC.haveClass(\"org.apache.xpath.XPath\") &&",
" JDBC.haveClass(\"org.apache.xalan.xslt.EnvironmentCheck\");",
"",
" /**",
" * Determine if we have the minimum required version of Xalan",
" * for successful use of the XML operators.",
" */",
" private static final boolean HAVE_MIN_XALAN",
" = HAVE_XALAN && checkXalanVersion();",
"",
" /**",
" * Return true if the classpath contains JAXP and",
" * Xalan classes (this method doesn't care about",
" * the particular version of Xalan).",
" */",
" public static boolean classpathHasXalanAndJAXP()",
" {",
" return HAVE_JAXP && HAVE_XALAN;",
" }",
"",
" /**",
" * Return true if the classpath meets all of the requirements",
" * for use of the SQL/XML operators. This means that all",
" * required classes exist in the classpath AND the version",
" * of Xalan that we found is at least MIN_XALAN_VERSION.",
" */",
" public static boolean classpathMeetsXMLReqs()",
" {",
" return HAVE_JAXP && HAVE_MIN_XALAN;",
" }",
"",
" /**",
" * Determine whether or not the classpath with which we're",
" * running has a version of Xalan that meets the minimum",
" * Xalan version requirement. We do that by using a Java",
" * utility that ships with Xalan--namely, \"EnvironmentCheck\"--",
" * and by parsing the info gathered by that method to find",
" * the Xalan version. We use reflection when doing this",
" * so that this file will compile/execute even if XML classes",
" * are missing.",
" *",
" * Assumption is that we only get to this method if we already",
" * know that there *is* a version of Xalan in the classpath",
" * and that version includes the \"EnvironmentCheck\" class.",
" *",
" * Note that this method returns false if the call to Xalan's",
" * EnvironmentCheck.checkEnvironment() returns false for any",
" * reason. As a specific example, that method will always",
" * return false when running with ibm131 because it cannot",
" * find the required methods on the SAX 2 classes (apparently",
" * the classes in ibm131 jdk don't have all of the methods",
" * required by Xalan). Thus this method will always return",
" * \"false\" for ibm131.",
" */",
" private static boolean checkXalanVersion()",
" {",
" boolean haveMinXalanVersion = false;",
" try {",
"",
" // These io objects allow us to retrieve information generated",
" // by the call to EnvironmenCheck.checkEnvironment()",
" ByteArrayOutputStream bos = new ByteArrayOutputStream();",
" PrintWriter pW = new PrintWriter(bos);",
"",
" // Call the method using reflection.",
"",
" Class cl = Class.forName(\"org.apache.xalan.xslt.EnvironmentCheck\");",
" Method meth = cl.getMethod(\"checkEnvironment\",",
" new Class[] { PrintWriter.class });",
"",
" Boolean boolObj = (Boolean)meth.invoke(",
" cl.newInstance(), new Object [] { pW });",
"",
" pW.flush();",
" bos.flush();",
"",
" cl = null;",
" meth = null;",
" pW = null;",
"",
" /* At this point 'bos' holds a list of properties with",
" * a bunch of environment information. The specific",
" * property we're looking for is \"version.xalan2_2\",",
" * so get that property, parse the value, and see",
" * if the version is at least the minimum required.",
" */",
" if (boolObj.booleanValue())",
" {",
" // Load the properties gathered from checkEnvironment().",
" Properties props = new Properties();",
" props.load(new ByteArrayInputStream(bos.toByteArray()));",
" bos.close();",
"",
" // Now pull out the one we need.",
" String ver = props.getProperty(\"version.xalan2_2\");",
" haveMinXalanVersion = (ver != null);",
" if (haveMinXalanVersion)",
" {",
" /* We found the property, so parse out the necessary",
" * piece. The value is of the form:",
" *",
" * <productName> Major.minor.x",
" *",
" * Ex:",
" *",
" * version.xalan2_2=Xalan Java 2.5.1 ",
" * version.xalan2_2=XSLT4J Java 2.6.6",
" */",
" int i = 0;",
" StringTokenizer tok = new StringTokenizer(ver, \". \");",
" while (tok.hasMoreTokens())",
" {",
" String str = tok.nextToken().trim();",
" if (Character.isDigit(str.charAt(0)))",
" {",
" int val = Integer.valueOf(str).intValue();",
" if (val < MIN_XALAN_VERSION[i])",
" {",
" haveMinXalanVersion = false;",
" break;",
" }",
" i++;",
" }",
"",
" /* If we've checked all parts of the min version,",
" * then we assume we're okay. Ex. \"2.5.0.2\"",
" * is considered greater than \"2.5.0\".",
" */",
" if (i >= MIN_XALAN_VERSION.length)",
" break;",
" }",
"",
" /* If the value had fewer parts than the",
" * mininum version, then it doesn't meet",
" * the requirement. Ex. \"2.5\" is considered",
" * to be a lower version than \"2.5.0\".",
" */",
" if (i < MIN_XALAN_VERSION.length)",
" haveMinXalanVersion = false;",
" }",
" }",
"",
" /* Else the call to checkEnvironment() returned \"false\",",
" * which means it couldn't find all of the classes/methods",
" * required for Xalan to function. So in that case we'll",
" * fall through and just return false, as well.",
" */",
"",
" } catch (Throwable t) {",
"",
" System.out.println(\"Unexpected exception while \" +",
" \"trying to find Xalan version:\");",
" t.printStackTrace(System.err);",
"",
" // If something went wrong, assume we don't have the",
" // necessary classes.",
" haveMinXalanVersion = false;",
"",
" }",
"",
" return haveMinXalanVersion;",
" }",
"}"
],
"header": "@@ -0,0 +1,231 @@",
"removed": []
}
]
}
] |
derby-DERBY-1758-cd4ba4a7
|
DERBY-1758 (partial): Adds a new JUnit test to replace the old
lang/xmlBinding.java test. The patch does the following:
- Adds XML file insertion utility methods to junit/XML.java
- Creates a new JUnit test called lang/XMLBindingTest.java that
uses the new utility methods to test various binding scenarios
with Derby's SQL/XML operators.
- Overloads the TestConfiguration.defaultSuite() method with a boolean
signature to allow optional addition of CleanDatabaseSetup.
- Updates lang/XMLTypeAndOpsTest to use the new overloaded defaultSuite()
method.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@476365 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/XML.java",
"hunks": [
{
"added": [
"import java.io.IOException;",
"import java.io.InputStreamReader;",
"import java.security.PrivilegedActionException;",
"",
"import java.sql.Connection;",
"import java.sql.PreparedStatement;",
"import java.sql.SQLException;"
],
"header": "@@ -19,11 +19,18 @@",
"removed": []
},
{
"added": [],
"header": "@@ -32,7 +39,6 @@ import junit.framework.Assert;",
"removed": [
" *"
]
},
{
"added": [
" /**",
" * The filepath for the directory that holds the XML \"helper\" files",
" * (i.e. the files to insert and their schema documents).",
" */",
" private static final String HELPER_FILE_LOCATION =",
" \"org/apache/derbyTesting/functionTests/tests/lang/xmlTestFiles/\";",
""
],
"header": "@@ -76,6 +82,13 @@ public class XML {",
"removed": []
},
{
"added": [
" /**",
" * Insert the contents of a file into the received column of",
" * the received table using \"setCharacterStream\". Expectation",
" * is that the file is in the directory indicated by ",
" * HELPER_FILE_LOCATION.",
" *",
" * @param conn Connection on which to perform the insert.",
" * @param tableName Table into which we want to insert.",
" * @param colName Column in tableName into which we want to insert.",
" * @param fName Name of the file whose content we want to insert.",
" * @param numRows Number of times we should insert the received",
" * file's content.",
" */",
" public static void insertFile(Connection conn, String tableName,",
" String colName, String fName, int numRows)",
" throws IOException, SQLException, PrivilegedActionException",
" {",
" // First we have to figure out many chars long the file is.",
"",
" fName = HELPER_FILE_LOCATION + fName;",
" java.net.URL xFile = BaseTestCase.getTestResource(fName);",
" Assert.assertNotNull(\"XML input file missing: \" + fName, xFile);",
" ",
" int charCount = 0;",
" char [] cA = new char[1024];",
" InputStreamReader reader =",
" new InputStreamReader(BaseTestCase.openTestResource(xFile));",
"",
" for (int len = reader.read(cA, 0, cA.length); len != -1;",
" charCount += len, len = reader.read(cA, 0, cA.length));",
"",
" reader.close();",
"",
" // Now that we know the number of characters, we can insert",
" // using a stream.",
"",
" PreparedStatement pSt = conn.prepareStatement(",
" \"insert into \" + tableName + \"(\" + colName + \") values \" +",
" \"(xmlparse(document cast (? as clob) preserve whitespace))\");",
"",
" for (int i = 0; i < numRows; i++)",
" {",
" reader = new InputStreamReader(",
" BaseTestCase.openTestResource(xFile));",
"",
" pSt.setCharacterStream(1, reader, charCount);",
" pSt.execute();",
" reader.close();",
" }",
"",
" pSt.close();",
" }",
"",
" /**",
" * Insert an XML document into the received column of the received",
" * test table using setString. This method parallels \"insertFiles\"",
" * above, except that it should be used for documents that require",
" * a Document Type Definition (DTD). In that case the location of",
" * the DTD has to be modified _within_ the document so that it can",
" * be found in the running user directory.",
" *",
" * Expectation is that the file to be inserted is in the directory",
" * indicated by HELPER_FILE_LOCATION and that the DTD file has been",
" * copied to the user's running directory (via use of the util",
" * methods in SupportFilesSetup).",
" *",
" * @param conn Connection on which to perform the insert.",
" * @param tableName Table into which we want to insert.",
" * @param colName Column in tableName into which we want to insert.",
" * @param fName Name of the file whose content we want to insert.",
" * @param dtdName Name of the DTD file that the received file uses.",
" * @param numRows Number of times we should insert the received",
" * file's content.",
" */",
" public static void insertDocWithDTD(Connection conn, String tableName,",
" String colName, String fName, String dtdName, int numRows)",
" throws IOException, SQLException, PrivilegedActionException",
" {",
" // Read the file into memory so we can update it.",
" fName = HELPER_FILE_LOCATION + fName;",
" java.net.URL xFile = BaseTestCase.getTestResource(fName);",
" Assert.assertNotNull(\"XML input file missing: \" + fName, xFile);",
"",
" int charCount = 0;",
" char [] cA = new char[1024];",
" StringBuffer sBuf = new StringBuffer();",
" InputStreamReader reader =",
" new InputStreamReader(BaseTestCase.openTestResource(xFile));",
"",
" for (int len = reader.read(cA, 0, cA.length); len != -1;",
" charCount += len, len = reader.read(cA, 0, cA.length))",
" {",
" sBuf.append(cA, 0, len);",
" }",
"",
" reader.close();",
"",
" // Now replace the DTD location.",
"",
" java.net.URL dtdURL = SupportFilesSetup.getReadOnlyURL(dtdName);",
" Assert.assertNotNull(\"DTD file missing: \" + dtdName, dtdURL);",
"",
" String docAsString = sBuf.toString();",
" int pos = docAsString.indexOf(dtdName);",
" if (pos != -1)",
" sBuf.replace(pos, pos+dtdName.length(), dtdURL.toExternalForm());",
"",
" // Now (finally) do the insert using the in-memory document with",
" // the correct DTD location.",
" docAsString = sBuf.toString();",
" PreparedStatement pSt = conn.prepareStatement(",
" \"insert into \" + tableName + \"(\" + colName + \") values \" +",
" \"(xmlparse(document cast (? as clob) preserve whitespace))\");",
"",
" for (int i = 0; i < numRows; i++)",
" {",
" pSt.setString(1, docAsString);",
" pSt.execute();",
" }",
"",
" pSt.close();",
" }",
""
],
"header": "@@ -97,6 +110,129 @@ public class XML {",
"removed": []
}
]
}
] |
derby-DERBY-1758-de7372b8
|
DERBY-1758 (partial):
1. Updates XMLBindingTest to ignore the Windows line-ending character
("\r") when counting characters as part of serialization.
2. Updates XMLBindingTest to run with NO security manager for now.
This works toward the "progress not perfection" goal of incremental
development. Once the questions surrounding the security policy for
JAXP have been answered the test can be updated to run with the security
manager.
3. Creates a new JUnit suite, suites/XMLSuite.java, to run all of the
XML JUnit tests, and enables that suite to run as part of
lang/_Suite.java, which in turn means it is executed as part
suites.All.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@478336 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/SecurityManagerSetup.java",
"hunks": [
{
"added": [
" /**",
" * Same as noSecurityManager() above but takes a TestSetup",
" * instead of a BaseTestCase.",
" */",
" public static Test noSecurityManager(TestSetup tSetup)",
" {",
"\t\tif (externalSecurityManagerInstalled)",
"\t\t\treturn new TestSuite();",
"\t\treturn new SecurityManagerSetup(tSetup, \"<NONE>\");",
" }",
""
],
"header": "@@ -83,6 +83,17 @@ public final class SecurityManagerSetup extends TestSetup {",
"removed": []
}
]
}
] |
derby-DERBY-1758-f7abbf44
|
DERBY-1758 (partial): Enable the lang/XMLBindingTest to run under a security
manager. Changes include all of the following:
- Updates lang/XMLBindingTest.java so that it will run under the default
testing security manager (i.e. removed the "noSecurityManager()" wrapper).
- Adds a new property, derbyTesting.jaxpjar, to the default testing policy
file. This property holds the location of the JAXP jar picked up from the
classpath _if_ that jar is external to the JVM. If the jar is either embedded
within, or "endorsed" by, the JVM then this property is unused.
The JAXP jar is then given permission to read the "extin" testing
directory, which is the directory into which the DTD required by XMLBindingTest
is copied (and thus JAXP has permission to read the DTD file).
- Adds a new static utility method, "getJAXPParserLocation()", to the
junit/XML.java file. This method instantiates a JAXP object and then uses
the implementation-specific class name to try to find out where the JAXP
jar is located.
- Modifies derbyTesting/junit/build.xml so that junit/XML.java will only
build with 1.4 JVMs and higher. This is required because junit/XML.java
now references a JAXP class that is not defined in 1.3.
- Updates the "getURL()" method of junit/SecurityManagerSetup.java to account
for situations where a class "code source" is null. Also updates the
"determineClasspath()" method of that class to set the derbyTesting.jaxpjar
property as appropriate.
- And finally, moves the build order of the derbyTesting/junit directory
so that it is built *before* the derbyTesting/harness directory.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@482433 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/SecurityManagerSetup.java",
"hunks": [
{
"added": [
" /* When inserting XML values that use external DTD's, the JAXP",
" * parser needs permission to read the DTD files. So here we set",
" * a property to hold the location of the JAXP implementation",
" * jar file. We can then grant the JAXP impl the permissions",
" * needed for reading the DTD files.",
" */",
" String jaxp = XML.getJAXPParserLocation();",
" if (jaxp != null)",
" classPathSet.setProperty(\"derbyTesting.jaxpjar\", jaxp);",
""
],
"header": "@@ -252,6 +252,16 @@ public final class SecurityManagerSetup extends TestSetup {",
"removed": []
}
]
},
{
"file": "java/testing/org/apache/derbyTesting/junit/XML.java",
"hunks": [
{
"added": [
"import java.net.URL;"
],
"header": "@@ -26,6 +26,7 @@ import java.io.ByteArrayOutputStream;",
"removed": []
},
{
"added": [
"/* The following import is for a JDBC 3.0 JAXP class, which means that",
" * this file can only be built with 1.4 or later (see build.xml in",
" * this directory). This means that 1.3 JVMs will not be able to",
" * instantiate this class--but since 1.3 is deprecated as of 10.3,",
" * we do not worry about that here.",
" */",
"import javax.xml.parsers.DocumentBuilderFactory;",
""
],
"header": "@@ -35,6 +36,14 @@ import java.sql.SQLException;",
"removed": []
},
{
"added": [
" /**",
" * String form of the URL for the jar file in the user's classpath",
" * that holds the JAXP implementation in use. If the implementation",
" * is embedded within, or endorsed by, the JVM, then we will set this",
" * field to be an empty string.",
" */",
" private static String jaxpURLString = null;",
""
],
"header": "@@ -89,6 +98,14 @@ public class XML {",
"removed": []
}
]
}
] |
derby-DERBY-1759-0c5a8eb5
|
DERBY-1759: XMLSERIALIZE doesn't follow spec when serializing sequence
This patch was contributed by Army Brown ([email protected])
The patch does the following:
1. Adds logic to SqlXmlUtil.serializeToString() to perform the
steps of "sequence normalization" as required by XML serialization
rules. This includes throwing an error if the user attempts to
explicitly serialize a sequence that contains one or more top-level
attribute nodes.
2. In order to ensure that the serialization error is only thrown
when the user performs an explicit XMLSERIALIZE, I added a
new field, "containsTopLevelAttr", to the XML class. This field
indicates whether or not the XML value corresponds to a sequence
with a top-level attribute node. If the user calls XMLSERIALIZE
on an XMLDataValue for which containsTopLevelAttr is true,
then we'll throw the serialization error 2200W as dictated by
SQL/XML.
3. Adds appropriate test cases to lang/xml_general.sql to verify
the fix.
4. Since Xalan doesn't provide a built-in way to retrieve a sequence
of attribute values (as opposed to attribute nodes), I also included
in lang/xml_general.sql a (rather ugly) way to accomplish that
task within the serialization restrictions of SQL/XML.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@441740 13f79535-47bb-0310-9956-ffa450edef68
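A minimal, self-contained sketch of the check this commit introduces (class and method names here are illustrative, not Derby's actual internals): a sequence containing a top-level, parentless attribute node must be flagged so that explicit XMLSERIALIZE can raise the 2200W serialization error.

```java
import java.util.List;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Attr;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class TopLevelAttrCheck {
    // Returns true if any item in the sequence is a parentless attribute
    // node; per SQL/XML serialization rules, serializing such a sequence
    // must raise error 2200W.
    static boolean hasTopLevelAttr(List<Node> items) {
        for (Node n : items) {
            if (n.getNodeType() == Node.ATTRIBUTE_NODE
                    && ((Attr) n).getOwnerElement() == null) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Attr orphan = doc.createAttribute("x");        // no owner element
        if (!hasTopLevelAttr(List.of(orphan)))
            throw new AssertionError("expected top-level attribute");
        if (hasTopLevelAttr(List.of(doc.createElement("e"))))
            throw new AssertionError("element is not an attribute");
        System.out.println("ok");
    }
}
```

In the real code the flag is tracked on the XML value itself (the `containsTopLevelAttr` field in the diff below) so the error fires only on explicit serialization.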
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/SqlXmlUtil.java",
"hunks": [
{
"added": [
"import org.w3c.dom.Attr;"
],
"header": "@@ -32,6 +32,7 @@ import java.io.StringReader;",
"removed": []
},
{
"added": [
"",
" /* The second argument in the following call is for",
" * catching cases where we have a top-level (parentless)",
" * attribute node--but since we just created the list",
" * with a single Document node, we already we know we",
" * don't have a top-level attribute node in the list,",
" * so we don't have to worry. Hence the \"null\" here.",
" */",
" return serializeToString(aList, null);",
" * rules, which ultimately point to XML serialization rules as",
" * defined by w3c. As part of that serialization process we have",
" * to first \"normalize\" the sequence. We do that by iterating through",
" * the list and performing the steps for \"sequence normalization\" as",
" * defined here:",
" *",
" * http://www.w3.org/TR/xslt-xquery-serialization/#serdm",
" *",
" * This method primarily focuses on taking the steps for normalization;",
" * for the rest of the serialization work, we just make calls on the",
" * DOMSerializer class provided by Xalan.",
" * @param xmlVal XMLDataValue into which the serialized string",
" * returned by this method is ultimately going to be stored.",
" * This is used for keeping track of XML values that represent",
" * sequences having top-level (parentless) attribute nodes.",
" * @return Single string holding the serialized version of the",
" * normalized sequence created from the items in the received",
" * list.",
" protected String serializeToString(ArrayList items,",
" XMLDataValue xmlVal) throws java.io.IOException"
],
"header": "@@ -297,23 +298,43 @@ public class SqlXmlUtil",
"removed": [
" return serializeToString(aList);",
" * rules. We do that by going through each item in the array",
" * list and either serializing it (if it's a Node) or else",
" * just echoing the value to the serializer (if it's a Text",
" * node or an atomic value).",
" * @return Single string holding the concatenation of the serialized",
" * form of all items in the list",
" protected String serializeToString(ArrayList items)",
" throws java.io.IOException"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/XML.java",
"hunks": [
{
"added": [
" /*",
" * Whether or not this XML value corresponds to a sequence",
" * that has one or more top-level (\"parentless\") attribute",
" * nodes. If so then we have to throw an error if the user",
" * attempts to serialize this value, per XML serialization",
" * rules.",
" */",
" private boolean containsTopLevelAttr;",
""
],
"header": "@@ -130,6 +130,15 @@ public class XML",
"removed": []
},
{
"added": [
" containsTopLevelAttr = false;",
" *",
" * @param xmlType Qualified XML type for \"val\"",
" * @param seqWithAttr Whether or not \"val\" corresponds to",
" * sequence with one or more top-level attribute nodes.",
" * @return A new instance of XML whose fields are clones",
" * of the values received.",
" private XML(SQLChar val, int xmlType, boolean seqWithAttr)",
" if (seqWithAttr)",
" markAsHavingTopLevelAttr();"
],
"header": "@@ -137,32 +146,26 @@ public class XML",
"removed": [
" * Takes a SQLChar and clones it.",
" * @param val A SQLChar instance to clone and use for",
" * this XML value.",
" */",
" private XML(SQLChar val)",
" {",
" xmlStringValue = (val == null ? null : (SQLChar)val.getClone());",
" xType = -1;",
" }",
"",
" /**",
" * Private constructor used for the getClone() method.",
" * Takes a SQLChar and clones it and also takes a",
" * qualified XML type and stores that as this XML",
" * object's qualified type.",
" private XML(SQLChar val, int xmlType)"
]
},
{
"added": [
" return new XML(xmlStringValue, getXType(), hasTopLevelAttr());"
],
"header": "@@ -174,7 +177,7 @@ public class XML",
"removed": [
" return new XML(xmlStringValue, getXType());"
]
},
{
"added": [
" {",
" if (((XMLDataValue)theValue).hasTopLevelAttr())",
" markAsHavingTopLevelAttr();",
" }"
],
"header": "@@ -284,7 +287,11 @@ public class XML",
"removed": []
},
{
"added": [
" /* XML serialization rules say that sequence \"normalization\"",
" * must occur before serialization, and normalization dictates",
" * that a serialization error must be thrown if the XML value",
" * is a sequence with a top-level attribute. We normalized",
" * (and serialized) this XML value when it was first created,",
" * and at that time we took note of whether or not there is",
" * a top-level attribute. So throw the error here if needed.",
" * See SqlXmlUtil.serializeToString() for more on sequence",
" * normalization.",
" */",
" if (this.hasTopLevelAttr())",
" {",
" throw StandardException.newException(",
" SQLState.LANG_XQUERY_SERIALIZATION_ERROR);",
" }",
""
],
"header": "@@ -636,6 +643,22 @@ public class XML",
"removed": []
},
{
"added": [
" result = new XML();",
" String strResult = sqlxUtil.serializeToString(itemRefs, result);",
" result.setValue(new SQLChar(strResult));"
],
"header": "@@ -749,11 +772,10 @@ public class XML",
"removed": [
" String strResult = sqlxUtil.serializeToString(itemRefs);",
" result = new XML(new SQLChar(strResult));",
" else",
" result.setValue(new SQLChar(strResult));"
]
},
{
"added": [
"",
" /* If the target type is XML_DOC_ANY then this XML value",
" * holds a single well-formed Document. So we know that",
" * we do NOT have any top-level attribute nodes. Note: if",
" * xtype is SEQUENCE we don't set \"containsTopLevelAttr\"",
" * here; assumption is that the caller of this method will",
" * then set the field as appropriate. Ex. see \"setFrom()\"",
" * in this class.",
" */",
" if (xtype == XML_DOC_ANY)",
" containsTopLevelAttr = false;"
],
"header": "@@ -795,6 +817,17 @@ public class XML",
"removed": []
}
]
}
] |
derby-DERBY-176-38fe427c
|
DERBY-176 Produce a clear error message when a SQL statement exceeds the
limit(s) of the generated Java class. These limits are imposed by the Java
Virtual Machine specification.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@358605 13f79535-47bb-0310-9956-ffa450edef68
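For context, the kind of statement that hits these JVM class-file limits is one with a very large number of query elements, e.g. a huge IN list. A hypothetical helper (not part of Derby) for building such a statement:

```java
public class LargeInQuery {
    // Builds "SELECT * FROM <table> WHERE <column> IN (?, ?, ..., ?)" with
    // paramCount markers; with tens of thousands of markers, the Java code
    // generated for the statement can exceed the JVM's 64KB-per-method limit.
    static String build(String table, String column, int paramCount) {
        StringBuilder sql = new StringBuilder("SELECT * FROM ")
                .append(table).append(" WHERE ").append(column).append(" IN (");
        for (int i = 0; i < paramCount; i++) {
            if (i > 0) sql.append(", ");
            sql.append('?');
        }
        return sql.append(')').toString();
    }

    public static void main(String[] args) {
        System.out.println(build("t", "c", 3));
        // prints: SELECT * FROM t WHERE c IN (?, ?, ?)
    }
}
```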
|
[
{
"file": "java/engine/org/apache/derby/iapi/reference/SQLState.java",
"hunks": [
{
"added": [
"\tString GENERATED_CLASS_LIMIT_EXCEEDED\t= \"XBCM4.S\";"
],
"header": "@@ -197,6 +197,7 @@ public interface SQLState {",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/BCClass.java",
"hunks": [
{
"added": [
"\t",
"\t/**",
"\t * Simple text indicating any limits execeeded while generating",
"\t * the class file.",
"\t */",
"\tprivate String limitMsg;",
"\t"
],
"header": "@@ -80,6 +80,13 @@ import java.io.IOException;",
"removed": []
},
{
"added": [
"\t\t",
"\t\tif (limitMsg != null)",
"\t\t\tthrow StandardException.newException(",
"\t\t\t\t\tSQLState.GENERATED_CLASS_LIMIT_EXCEEDED, getFullName(), limitMsg);"
],
"header": "@@ -153,6 +160,10 @@ class BCClass extends GClass {",
"removed": []
},
{
"added": [
"\t\tchunk.complete(null, classHold, method, typeWidth, 1);"
],
"header": "@@ -376,7 +387,7 @@ class BCClass extends GClass {",
"removed": [
"\t\tchunk.complete(classHold, method, typeWidth, 1);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/BCMethod.java",
"hunks": [
{
"added": [
"\t\tmyCode.complete(this, modClass, myEntry, maxStack, currentVarNum);"
],
"header": "@@ -205,7 +205,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\tmyCode.complete(modClass, myEntry, maxStack, currentVarNum);"
]
},
{
"added": [
"\t\tType[] entryStack = condition.startElse(this, myCode, copyStack());"
],
"header": "@@ -918,7 +918,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\tType[] entryStack = condition.startElse(myCode, copyStack());"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/CodeChunk.java",
"hunks": [
{
"added": [
"\t * Limits checked here are from these sections of the JVM spec.",
"\t * <UL>",
"\t * <LI> 4.7.3 The Code Attribute",
"\t * <LI> 4.10 Limitations of the Java Virtual Machine ",
"\t * </UL>",
"\tprivate void fixLengths(BCMethod mb, int maxStack, int maxLocals, int codeLength) {",
"\t\tif (mb != null && maxStack > 65535)",
"\t\t\tmb.cb.addLimitExceeded(mb, \"max_stack\", 65535, maxStack);",
"\t\t\t",
"\t\tif (mb != null && maxLocals > 65535)",
"\t\t\tmb.cb.addLimitExceeded(mb, \"max_locals\", 65535, maxLocals);",
"\t\tif (mb != null && codeLength > 65536)",
"\t\t\tmb.cb.addLimitExceeded(mb, \"code_length\", 65536, codeLength);"
],
"header": "@@ -350,25 +350,36 @@ class CodeChunk {",
"removed": [
"\tvoid fixLengths(int maxStack, int maxLocals, int codeLength) {",
""
]
},
{
"added": [
"\tvoid complete(BCMethod mb, ClassHolder ch,",
"\t\t\tClassMember method, int maxStack, int maxLocals) {"
],
"header": "@@ -376,7 +387,8 @@ class CodeChunk {",
"removed": [
"\tvoid complete(ClassHolder ch, ClassMember method, int maxStack, int maxLocals) {"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/Conditional.java",
"hunks": [
{
"added": [
"\t/**",
"\t * Offset in the code stream of the 'if' opcode.",
"\t */"
],
"header": "@@ -43,6 +43,9 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": []
},
{
"added": [
"\tType[] startElse(BCMethod mb, CodeChunk chunk, Type[] thenStack) {"
],
"header": "@@ -70,7 +73,7 @@ class Conditional {",
"removed": [
"\tType[] startElse(CodeChunk chunk, Type[] thenStack) {"
]
},
{
"added": [
"\t\tfillIn(mb, chunk, ifOffset);"
],
"header": "@@ -78,7 +81,7 @@ class Conditional {",
"removed": [
"\t\tfillIn(chunk, ifOffset);"
]
},
{
"added": [
"\tConditional end(BCMethod mb, CodeChunk chunk, Type[] elseStack, int stackNumber) {",
"\t\t\tfillIn(mb, chunk, ifOffset);",
"\t\t\tfillIn(mb, chunk, thenGotoOffset);"
],
"header": "@@ -94,13 +97,13 @@ class Conditional {",
"removed": [
"\tConditional end(CodeChunk chunk, Type[] elseStack, int stackNumber) {",
"\t\t\tfillIn(chunk, ifOffset);",
"\t\t\tfillIn(chunk, thenGotoOffset);"
]
}
]
}
] |
derby-DERBY-176-51cefa2a
|
DERBY-176 DERBY-766 Modify pushing a long value in generated code to avoid
using constant pool entries if the long is within the range of a short.
Then use the I2L instruction to convert the int to a long. Also if
the long is within range of an int, then create an integer constant pool
entry and I2L to avoid using two constant pool slots.
Add some clarifying comments over the code length in complete.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@378744 13f79535-47bb-0310-9956-ffa450edef68
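The encoding choice described above can be sketched as a small decision function (illustrative only; the real logic lives in `BCMethod.push(long)` in the diff below):

```java
public class LongPushChoice {
    // Which bytecode strategy a generator would pick for a long constant.
    static String strategyFor(long value) {
        if (value == 0L || value == 1L) {
            return "LCONST";      // dedicated opcode, no constant pool entry
        } else if (value >= Short.MIN_VALUE && value <= Short.MAX_VALUE) {
            return "SIPUSH+I2L";  // immediate operand (BIPUSH for byte range),
                                  // then widen on the stack; no pool entry
        } else if (value >= Integer.MIN_VALUE && value <= Integer.MAX_VALUE) {
            return "LDC+I2L";     // one int pool slot instead of two long slots
        } else {
            return "LDC2_W";      // full long constant, two pool slots
        }
    }

    public static void main(String[] args) {
        System.out.println(strategyFor(1L));        // LCONST
        System.out.println(strategyFor(30000L));    // SIPUSH+I2L
        System.out.println(strategyFor(100000L));   // LDC+I2L
        System.out.println(strategyFor(1L << 40));  // LDC2_W
    }
}
```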
|
[
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/BCMethod.java",
"hunks": [
{
"added": [
" // myCode.getPC() gives the code length since",
" // the program counter will be positioned after",
" // the last instruction. Note this value can",
" // be changed by the splitMethod call.",
" if (myCode.getPC() > CODE_SPLIT_LENGTH)",
" "
],
"header": "@@ -224,11 +224,14 @@ class BCMethod implements MethodBuilder {",
"removed": [
" int codeLength = myCode.getPC();",
" if (codeLength > CODE_SPLIT_LENGTH)",
" "
]
},
{
"added": [
" /**",
" * Push an integer value. Uses the special integer opcodes",
" * for the constants -1 to 5, BIPUSH for values that fit in",
" * a byte and SIPUSH for values that fit in a short. Otherwise",
" * uses LDC with a constant pool entry.",
" * ",
" * @param value Value to be pushed",
" * @param type Final type of the value.",
" */"
],
"header": "@@ -491,6 +494,15 @@ class BCMethod implements MethodBuilder {",
"removed": []
},
{
"added": [
"\t\tgrowStack(type.width(), type);",
" /**",
" * Push a long value onto the stack.",
" * For the values zero and one the LCONST_0 and",
" * LCONST_1 instructions are used.",
" * For values betwee Short.MIN_VALUE and Short.MAX_VALUE",
" * inclusive an byte/short/int value is pushed",
" * using push(int, Type) followed by an I2L instruction.",
" * This saves using a constant pool entry for such values.",
" * All other values use a constant pool entry. For values",
" * in the range of an Integer an integer constant pool",
" * entry is created to allow sharing with integer constants",
" * and to reduce constant pool slot entries.",
" */",
"\tpublic void push(long value) {",
" CodeChunk chunk = myCode;",
"",
" if (value == 0L || value == 1L) {",
" short opcode = value == 0L ? VMOpcode.LCONST_0 : VMOpcode.LCONST_1;",
" chunk.addInstr(opcode);",
" } else if (value >= Integer.MIN_VALUE && value <= Integer.MAX_VALUE) {",
" // the push(int, Type) method grows the stack for us.",
" push((int) value, Type.LONG);",
" chunk.addInstr(VMOpcode.I2L);",
" return;",
" } else {",
" int cpe = modClass.addConstant(value);",
" chunk.addInstrU2(VMOpcode.LDC2_W, cpe);",
" }",
" growStack(2, Type.LONG);",
" }"
],
"header": "@@ -505,23 +517,40 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\tgrowStack(1, type);",
"\tpublic void push(long value){",
"\t\tCodeChunk chunk = myCode;",
"",
"\t\tif (value == 0 || value == 1) {",
"\t\t\t\tchunk.addInstr((short)(VMOpcode.LCONST_0+(short)value));",
"\t\t}",
"\t\telse {",
"\t\t\tint cpe = modClass.addConstant(value);",
"\t\t\tchunk.addInstrU2(VMOpcode.LDC2_W, cpe);",
"\t\t}",
"\t\tgrowStack(2, Type.LONG);",
"",
"\t}"
]
}
]
}
] |
derby-DERBY-176-c8208e1e
|
DERBY-176 Improved version of the largeCodeGen test with looping based upon query element count
and a test for a large IN clause.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@354826 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-176-ee1cc94b
|
DERBY-176 DERBY-776 Add the initial utility code and split algorithm
to split a single generated method that exceeds the java virtual
machine limit of 65535 bytes of instructions. Allows the byte-code api
caller to generate code without worrying about exceeding the limit.
The initial split algorithm is the ability to split methods
that consist of multiple independent statements, seen by the stack
depth dropping to zero after a statement.
In the largeCodeGen test this change allowed the number of parameters
in the IN list query to increase from 3,400 to 97,000. The limit hit
at 98,000 was the number of constant pool entries.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@377609 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/BCMethod.java",
"hunks": [
{
"added": [
" ",
" /**",
" * Code length at which to split into sub-methods.",
" * Normally set to the maximim code length the",
" * JVM can support, but for testing the split code",
" * it can be reduced so that the standard tests",
" * cause some splitting. Tested with value set to 2000.",
" */",
" static final int CODE_SPLIT_LENGTH = VMOpcode.MAX_CODE_LENGTH;"
],
"header": "@@ -64,6 +64,15 @@ import java.io.IOException;",
"removed": []
},
{
"added": [
" /**",
" * Fast access for the parametes, will be null",
" * if the method has no parameters.",
" */",
"\tBCLocalField[] parameters; ",
" ",
" /**",
" * List of parameter types with java language class names.",
" * Can be null or zero length for no parameters.",
" */",
" private final String[] parameterTypes;",
" ",
" ",
"\tVector thrownExceptions; // expected to be names of Classes under Throwable"
],
"header": "@@ -75,8 +84,20 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\tprotected BCLocalField[] parameters; ",
"\tprotected Vector thrownExceptions; // expected to be names of Classes under Throwable"
]
},
{
"added": [
"\t\tif (parms != null && parms.length != 0) {"
],
"header": "@@ -118,7 +139,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\tif (parms != null) {"
]
},
{
"added": [
" ",
" parameterTypes = parms;"
],
"header": "@@ -142,6 +163,8 @@ class BCMethod implements MethodBuilder {",
"removed": []
},
{
"added": [
" ",
" ",
" int codeLength = myCode.getPC();",
" if (codeLength > CODE_SPLIT_LENGTH)",
" splitMethod();",
" ",
" // write exceptions attribute info",
" writeExceptions();",
" \t",
" ",
" /**",
" * Attempt to split a large method by pushing code out to several",
" * sub-methods. Performs a number of steps.",
" * <OL>",
" * <LI> Split at zero stack depth.",
" * <LI> Split at non-zero stack depth (FUTURE)",
" * </OL>",
" * ",
" * If the class has already exceeded some limit in building the",
" * class file format structures then don't attempt to split.",
" * Most likely the number of constant pool entries has been exceeded",
" * and thus the built class file no longer has integrity.",
" * The split code relies on being able to read the in-memory",
" * version of the class file in order to determine descriptors",
" * for methods and fields.",
" */",
" private void splitMethod() {",
" ",
" int split_pc = 0;",
" for (int codeLength = myCode.getPC();",
" (cb.limitMsg == null) &&",
" (codeLength > CODE_SPLIT_LENGTH);",
" codeLength = myCode.getPC())",
" {",
" int lengthToCheck = codeLength - split_pc;",
"",
" int optimalMinLength;",
" if (codeLength < CODE_SPLIT_LENGTH * 2) {",
" // minimum required",
" optimalMinLength = codeLength - CODE_SPLIT_LENGTH;",
" } else {",
" // try to split as much as possible",
" // need one for the return instruction",
" optimalMinLength = CODE_SPLIT_LENGTH - 1;",
" }",
"",
" if (optimalMinLength > lengthToCheck)",
" optimalMinLength = lengthToCheck;",
"",
" split_pc = myCode.splitZeroStack(this, modClass, split_pc,",
" lengthToCheck, optimalMinLength);",
"",
" // Negative split point returned means that no split",
" // was possible. Give up on this approach and goto",
" // the next approach (none-yet).",
" if (split_pc < 0) {",
" break;",
" }",
"",
" // success, continue on splitting after the call to the",
" // sub-method if the method still execeeds the maximum length.",
" }",
" }",
" * class interface",
" */",
" * In their giveCode methods, the parts of the method body will want to get",
" * to the constant pool to add their constants. We really only want them",
" * treating it like a constant pool inclusion mechanism, we could write a",
" * wrapper to limit it to that.",
" */"
],
"header": "@@ -200,24 +223,84 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\t// write exceptions attribute info",
"\t\twriteExceptions();",
"\t\t",
"\t * class interface",
"\t */",
"\t * In their giveCode methods, the parts of the method body",
"\t * will want to get to the constant pool to add their constants.",
"\t * We really only want them treating it like a constant pool",
"\t * inclusion mechanism, we could write a wrapper to limit it to that.",
"\t */"
]
},
{
"added": [
"\tint maxStack;"
],
"header": "@@ -290,7 +373,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\tprivate int maxStack;"
]
},
{
"added": [
"\t\t\taddInstrCPE(VMOpcode.LDC, cpe);"
],
"header": "@@ -420,7 +503,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\t\tchunk.addInstrCPE(VMOpcode.LDC, cpe);"
]
},
{
"added": [
"\t\t\taddInstrCPE(VMOpcode.LDC, cpe);"
],
"header": "@@ -458,7 +541,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\t\tchunk.addInstrCPE(VMOpcode.LDC, cpe);"
]
},
{
"added": [
"\t\taddInstrCPE(VMOpcode.LDC, cpe);"
],
"header": "@@ -477,7 +560,7 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\tmyCode.addInstrCPE(VMOpcode.LDC, cpe);"
]
},
{
"added": [
" ",
" /**",
" * Write a instruction that uses a constant pool entry",
" * as an operand, add a limit exceeded message if",
" * the number of constant pool entries has exceeded",
" * the limit.",
" */",
" private void addInstrCPE(short opcode, int cpe)",
" {",
" if (cpe >= VMOpcode.MAX_CONSTANT_POOL_ENTRIES)",
" cb.addLimitExceeded(this, \"constant_pool_count\",",
" VMOpcode.MAX_CONSTANT_POOL_ENTRIES, cpe);",
" ",
" myCode.addInstrCPE(opcode, cpe);",
" }"
],
"header": "@@ -1057,6 +1140,21 @@ class BCMethod implements MethodBuilder {",
"removed": []
},
{
"added": [
"\t\t{",
"\t\t}"
],
"header": "@@ -1072,7 +1170,9 @@ class BCMethod implements MethodBuilder {",
"removed": []
},
{
"added": [
"\t\t\t\t",
" \t\t",
"\t\tBCMethod subMethod = getNewSubMethod(myReturnType, false);",
" callSubMethod(subMethod);"
],
"header": "@@ -1141,43 +1241,22 @@ class BCMethod implements MethodBuilder {",
"removed": [
"\t\t",
"\t\t",
"\t\tint modifiers = myEntry.getModifier();\t",
"\t\t//System.out.println(\"NEED TO SPLIT \" + myEntry.getName() + \" \" + currentCodeSize + \" stack \" + stackDepth);",
"",
"\t\t// the sub-method can be private to ensure that no-one",
"\t\t// can call it accidentally from outside the class.",
"\t\tmodifiers &= ~(Modifier.PROTECTED | Modifier.PUBLIC);",
"\t\tmodifiers |= Modifier.PRIVATE;",
"\t\t",
"\t\tString subMethodName = myName + \"_s\" + Integer.toString(subMethodCount++);",
"\t\tBCMethod subMethod = (BCMethod) cb.newMethodBuilder(",
"\t\t\t\tmodifiers,",
"\t\t\t\tmyReturnType, subMethodName);",
"\t\tsubMethod.thrownExceptions = this.thrownExceptions;",
"\t\tshort op;",
"\t\tif ((modifiers & Modifier.STATIC) == 0)",
"\t\t{",
"\t\t\top = VMOpcode.INVOKEVIRTUAL;",
"\t\t\tthis.pushThis();",
"\t\t} else {",
"\t\t\top = VMOpcode.INVOKESTATIC;",
"\t\t}",
"\t\t",
"\t\tthis.callMethod(op, (String) null, subMethodName, myReturnType, 0);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/services/bytecode/CodeChunk.java",
"hunks": [
{
"added": [
"import java.lang.reflect.Modifier;"
],
"header": "@@ -32,6 +32,7 @@ import org.apache.derby.iapi.services.classfile.VMOpcode;",
"removed": []
},
{
"added": [
" int codeLength = getPC();"
],
"header": "@@ -733,7 +734,7 @@ final class CodeChunk {",
"removed": [
"\t\tint codeLength = getPC();"
]
},
{
"added": [
" ",
" ",
" "
],
"header": "@@ -919,13 +920,14 @@ final class CodeChunk {",
"removed": [
"",
""
]
},
{
"added": [
" "
],
"header": "@@ -993,8 +995,7 @@ final class CodeChunk {",
"removed": [
" // System.out.println(\"vmDescriptor\" + vmDescriptor);",
""
]
}
]
}
] |
derby-DERBY-1762-8bc31837
|
Add DatabasePropertyTestSetup decorator that sets and clears database properties.
Fix a bug in SystemPropertyTestSetup noticed while testing DatabasePropertyTestSetup.
Change ConcurrencyTest to use DatabasePropertyTestSetup to work around bug DERBY-1762.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@436653 13f79535-47bb-0310-9956-ffa450edef68
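The set-then-restore pattern these decorators implement can be sketched standalone (illustrative names, not the real Derby test API):

```java
import java.util.Properties;

public class PropertyRestorer {
    private final Properties newValues;
    private final Properties oldValues = new Properties();

    PropertyRestorer(Properties newValues) { this.newValues = newValues; }

    // Set each new value, remembering any previous value for restoration.
    void setUp() {
        for (String key : newValues.stringPropertyNames()) {
            String old = System.getProperty(key);
            if (old != null) oldValues.setProperty(key, old);
            System.setProperty(key, newValues.getProperty(key));
        }
    }

    // Restore previous values; clear properties that had none before.
    void tearDown() {
        for (String key : newValues.stringPropertyNames()) {
            String old = oldValues.getProperty(key);
            if (old == null) System.clearProperty(key);
            else System.setProperty(key, old);
        }
    }

    public static void main(String[] args) {
        Properties wanted = new Properties();
        wanted.setProperty("demo.prop", "new");
        System.setProperty("demo.prop", "old");
        PropertyRestorer r = new PropertyRestorer(wanted);
        r.setUp();
        System.out.println(System.getProperty("demo.prop"));  // prints: new
        r.tearDown();
        System.out.println(System.getProperty("demo.prop"));  // prints: old
    }
}
```

The bug fixed in SystemPropertyTestSetup concerns exactly this restore path: values must be stored into oldValues only on the initial pass, not while restoring.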
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/SystemPropertyTestSetup.java",
"hunks": [
{
"added": [
"\tprivate Properties newValues;",
"\tprivate Properties oldValues;"
],
"header": "@@ -34,8 +34,8 @@ import junit.framework.Test;",
"removed": [
"\tprivate final Properties newValues;",
"\tprivate final Properties oldValues;"
]
},
{
"added": [
" newValues = null;",
" oldValues = null;"
],
"header": "@@ -78,6 +78,8 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": []
},
{
"added": [
" // set, might need to be changed.",
" change = !old.equals(value);",
" ",
" // If we are not processing the oldValues",
" // then store in the oldValues. Reference equality is ok here.",
" \t\t\tif (change && (values != oldValues))"
],
"header": "@@ -92,8 +94,12 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": [
" \t\t\t// set, might need to be changed.",
" \t\t\tif (change = !old.equals(value))"
]
}
]
}
] |
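The SystemPropertyTestSetup fix above makes `newValues`/`oldValues` non-final so they can be dropped after restore. The underlying save-and-restore pattern can be sketched as a small standalone decorator; the class and method names here are illustrative, not Derby's actual test API:

```java
import java.util.Properties;

// Minimal sketch of a set-and-restore decorator for system properties,
// in the spirit of SystemPropertyTestSetup. Names are illustrative.
public class PropertySwap {
    private Properties newValues;
    private Properties oldValues = new Properties();

    public PropertySwap(Properties newValues) {
        this.newValues = newValues;
    }

    // setUp phase: remember any prior value, then install the new one.
    public void install() {
        for (String key : newValues.stringPropertyNames()) {
            String old = System.getProperty(key);
            if (old != null) {
                oldValues.setProperty(key, old);
            }
            System.setProperty(key, newValues.getProperty(key));
        }
    }

    // tearDown phase: restore prior values, clear keys that had none,
    // and drop the references so the decorator holds no stale state.
    public void restore() {
        for (String key : newValues.stringPropertyNames()) {
            String old = oldValues.getProperty(key);
            if (old == null) {
                System.clearProperty(key);
            } else {
                System.setProperty(key, old);
            }
        }
        newValues = null;
        oldValues = null;
    }
}
```

Nulling the fields at the end of `restore()` mirrors the `newValues = null; oldValues = null;` lines added in the hunk above.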
derby-DERBY-1764-0433f1a7
|
DERBY-1764 Rewrite stress.multi as a JUnit test
enable StressMultiTest.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@704259 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/SystemPropertyTestSetup.java",
"hunks": [
{
"added": [
"\tprivate boolean staticProperties;"
],
"header": "@@ -36,6 +36,7 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": []
},
{
"added": [
"\t\t\tProperties newValues,",
"\t\t\tboolean staticProperties)",
"\t\tthis.staticProperties = staticProperties;",
"\t/**",
"\t * Create a test decorator that sets and restores ",
"\t * System properties. Do not shutdown engine after",
"\t * setting properties",
"\t * @param test",
"\t * @param newValues",
"\t */",
"\tpublic SystemPropertyTestSetup(Test test,",
"\t\t\tProperties newValues)",
"\t{",
"\t\tsuper(test);",
"\t\tthis.newValues = newValues;",
"\t\tthis.oldValues = new Properties();",
"\t\tthis.staticProperties = false;",
"\t}"
],
"header": "@@ -45,13 +46,30 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": [
"\t\t\tProperties newValues)"
]
},
{
"added": [
" \t// shutdown engine so static properties take effect",
" \tif (staticProperties)",
" \t\tTestConfiguration.getCurrent().shutdownEngine();"
],
"header": "@@ -60,6 +78,9 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": []
},
{
"added": [
" \t// shutdown engine to restore any static properties",
" \tif (staticProperties)",
" \t\tTestConfiguration.getCurrent().shutdownEngine();"
],
"header": "@@ -78,6 +99,9 @@ public class SystemPropertyTestSetup extends TestSetup {",
"removed": []
}
]
}
] |
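The `staticProperties` flag added above exists because some system properties are only read when the engine boots, so the decorator must shut the engine down after setting them and again after restoring them. A rough sketch of that flow, with `Engine` as a stand-in interface (Derby's real call is `TestConfiguration.getCurrent().shutdownEngine()`):

```java
// Sketch of the staticProperties behavior: "static" system properties
// take effect only at engine boot, so force a reboot after changing them.
// Engine is a stand-in interface, not Derby's actual API.
public class StaticPropertySetup {
    public interface Engine { void shutdown(); }

    private final Engine engine;
    private final boolean staticProperties;

    public StaticPropertySetup(Engine engine, boolean staticProperties) {
        this.engine = engine;
        this.staticProperties = staticProperties;
    }

    // Called once from setUp after the new properties are installed,
    // and once from tearDown after the old values are restored.
    public void maybeReboot() {
        if (staticProperties) {
            engine.shutdown();
        }
    }
}
```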
derby-DERBY-1764-bde05cd1
|
DERBY-1764 (partial) Rewrite stress.multi as a JUnit test
incremental improvements for error handling; change StressMulti50x59 to do only an embedded run.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@678051 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1767-69d1cb89
|
DERBY-1767: insertRow(), updateRow() and deleteRow() cannot handle
table names and column names containing double quotes
Patch contributed by Fernanda Pizzorno.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@441185 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/ResultSet.java",
"hunks": [
{
"added": [
" insertSQL.append(quoteSqlIdentifier(",
" resultSetMetaData_.getColumnName(column)));"
],
"header": "@@ -4392,7 +4392,8 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" insertSQL.append(\"\\\"\" + resultSetMetaData_.getColumnName(column) + \"\\\"\");"
]
},
{
"added": [
" updateString += quoteSqlIdentifier(",
" resultSetMetaData_.getColumnName(column)) + ",
" \" = ? \";"
],
"header": "@@ -4425,7 +4426,9 @@ public abstract class ResultSet implements java.sql.ResultSet,",
"removed": [
" updateString += \"\\\"\" + resultSetMetaData_.getColumnName(column) + \"\\\" = ? \";"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [
" insertSQL.append(quoteSqlIdentifier(",
" rd.getColumnDescriptor(i).getName()));"
],
"header": "@@ -3629,8 +3629,8 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" insertSQL.append(\"\\\"\" + ",
" rd.getColumnDescriptor(i).getName() + \"\\\"\");"
]
},
{
"added": [
" if (statementContext != null)",
" lcc.popStatementContext(statementContext, null);"
],
"header": "@@ -3677,6 +3677,8 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": []
},
{
"added": [
" updateWhereCurrentOfSQL.append(quoteSqlIdentifier(",
" rd.getColumnDescriptor(i).getName()) + \"=?\");",
" updateWhereCurrentOfSQL.append(\" WHERE CURRENT OF \" + ",
" quoteSqlIdentifier(getCursorName()));"
],
"header": "@@ -3721,12 +3723,14 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" updateWhereCurrentOfSQL.append(\"\\\"\" + rd.getColumnDescriptor(i).getName() + \"\\\"=?\");",
" updateWhereCurrentOfSQL.append(\" WHERE CURRENT OF \\\"\" + getCursorName() + \"\\\"\");"
]
},
{
"added": [
" ",
" LanguageConnectionContext lcc = null;",
" StatementContext statementContext = null;",
" ",
" deleteWhereCurrentOfSQL.append(\" WHERE CURRENT OF \" + ",
" quoteSqlIdentifier(getCursorName()));",
" lcc = getEmbedConnection().getLanguageConnection();",
" ",
" statementContext = lcc.pushStatementContext(isAtomic, false, deleteWhereCurrentOfSQL.toString(), null, false, 0L);"
],
"header": "@@ -3783,18 +3787,23 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" deleteWhereCurrentOfSQL.append(\" WHERE CURRENT OF \\\"\" + getCursorName() + \"\\\"\");",
"",
" LanguageConnectionContext lcc = getEmbedConnection().getLanguageConnection();",
" StatementContext statementContext = lcc.pushStatementContext(isAtomic, false, deleteWhereCurrentOfSQL.toString(), null, false, 0L);"
]
},
{
"added": [
" if (statementContext != null)",
" lcc.popStatementContext(statementContext, null);"
],
"header": "@@ -3815,6 +3824,8 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": []
}
]
}
] |
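The DERBY-1767 patch replaces ad-hoc `"\"" + name + "\""` concatenation with a `quoteSqlIdentifier` helper. The rule such a helper typically implements is to wrap the identifier in double quotes and escape any embedded double quote by doubling it; a standalone sketch (illustrative, not Derby's actual implementation):

```java
// Sketch of SQL delimited-identifier quoting: wrap the identifier in
// double quotes and double any embedded double quote, so names like
// my"table survive round-tripping through generated SQL text.
public class SqlIdentifier {
    public static String quote(String identifier) {
        StringBuilder sb = new StringBuilder(identifier.length() + 2);
        sb.append('"');
        for (int i = 0; i < identifier.length(); i++) {
            char c = identifier.charAt(i);
            if (c == '"') {
                sb.append('"'); // escape an embedded quote by doubling it
            }
            sb.append(c);
        }
        sb.append('"');
        return sb.toString();
    }
}
```

This is why the generated `UPDATE ... WHERE CURRENT OF` and `INSERT` statements in the hunks above route every table and column name through the helper instead of splicing quotes in by hand.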
derby-DERBY-1768-a9988eb1
|
DERBY-1768: Removed empty JDBC 4.0 RowId classes
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@440321 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/jdbc/Driver40.java",
"hunks": [
{
"added": [],
"header": "@@ -32,7 +32,6 @@ import org.apache.derby.impl.jdbc.EmbedConnection;",
"removed": [
"import org.apache.derby.impl.jdbc.EmbedRowId;"
]
}
]
}
] |
derby-DERBY-177-8106edc9
|
DERBY-3850: Remove unneeded workarounds for DERBY-177 and DERBY-3693
Removed the wait parameter from methods called from
SPSDescriptor.updateSYSSTATEMENTS() since waiting is prevented by
another mechanism now.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@692495 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/DataDictionary.java",
"hunks": [
{
"added": [
"\t\tTransactionController\ttc"
],
"header": "@@ -1072,15 +1072,13 @@ public interface DataDictionary",
"removed": [
"\t * @param wait\t\t\tTo wait for lock or not",
"\t\tTransactionController\ttc,",
"\t\tboolean\t\t\t\t\twait"
]
},
{
"added": [],
"header": "@@ -1092,10 +1090,7 @@ public interface DataDictionary",
"removed": [
"\t * @param wait\t\tIf true, then the caller wants to wait for locks. False will be",
"\t * when we using a nested user xaction - we want to timeout right away if",
"\t * the parent holds the lock. (bug 4821)"
]
},
{
"added": [],
"header": "@@ -1104,7 +1099,6 @@ public interface DataDictionary",
"removed": [
"\t\t\tboolean\t\t\t\t\twait,"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/SPSDescriptor.java",
"hunks": [
{
"added": [],
"header": "@@ -1072,10 +1072,7 @@ public class SPSDescriptor extends TupleDescriptor",
"removed": [
"\t\tint[] \t\t\t\t\tcolsToUpdate;",
"\t\t//bug 4821 - we want to wait for locks if updating sysstatements on parent transaction",
"\t\tboolean wait = false;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t\tint insertRetCode = ti.insertRow(row, tc);"
],
"header": "@@ -1789,24 +1789,13 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t{",
"\t\taddDescriptor(td, parent, catalogNumber, duplicatesAllowed, tc, true);",
"\t}",
"",
"\t/**",
"\t * @inheritDoc",
"\t */",
"\tpublic void addDescriptor(TupleDescriptor td, TupleDescriptor parent,",
"\t\t\t\t\t\t\t int catalogNumber, boolean duplicatesAllowed,",
"\t\t\t\t\t\t\t TransactionController tc, boolean wait)",
"\t\tthrows StandardException",
"\t\tint insertRetCode = ti.insertRow(row, tc, wait);"
]
},
{
"added": [],
"header": "@@ -3377,9 +3366,6 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t * @param wait\t\tIf true, then the caller wants to wait for locks. False will be",
"\t * when we using a nested user xaction - we want to timeout right away if the parent",
"\t * holds the lock. (bug 4821)"
]
},
{
"added": [
"\t\t\t\t\t\t\t\t\t\tTransactionController tc)"
],
"header": "@@ -3387,8 +3373,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\t\t\t\t\t\t\tTransactionController tc,",
"\t\t\t\t\t\t\t\t\t\tboolean wait)"
]
},
{
"added": [
"\t\t\t\t\t tc);"
],
"header": "@@ -3458,8 +3443,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\t\t tc,",
"\t\t\t\t\t wait);"
]
},
{
"added": [
"\t\tTransactionController\ttc"
],
"header": "@@ -3959,8 +3943,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tTransactionController\ttc,",
"\t\tboolean wait"
]
},
{
"added": [
"\t\t\tinsertRetCode = ti.insertRow(row, tc);"
],
"header": "@@ -3982,7 +3965,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\tinsertRetCode = ti.insertRow(row, tc, wait);"
]
},
{
"added": [
"\t\taddSPSParams(descriptor, tc);",
"\tprivate void addSPSParams(SPSDescriptor spsd, TransactionController tc)"
],
"header": "@@ -3995,14 +3978,14 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\taddSPSParams(descriptor, tc, wait);",
"\tprivate void addSPSParams(SPSDescriptor spsd, TransactionController tc, boolean wait)"
]
},
{
"added": [
"\t\t\t\t\t\t tc);"
],
"header": "@@ -4034,7 +4017,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\t\t\t tc, wait);"
]
},
{
"added": [],
"header": "@@ -4079,13 +4062,9 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t * @param wait\t\tIf true, then the caller wants to wait for locks. False will be",
"\t * when we using a nested user xaction - we want to timeout right away if the parent",
"\t * holds the lock. (bug 4821)",
"\t *"
]
},
{
"added": [],
"header": "@@ -4093,14 +4072,12 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\tboolean\t\t\t\t\twait,",
"\t\tDataValueDescriptor\t\t\tcolumnNameOrderable;"
]
},
{
"added": [
"\t\t\t\t\t tc);"
],
"header": "@@ -4148,8 +4125,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\t\t tc,",
"\t\t\t\t\t wait);"
]
},
{
"added": [
"\t\t\taddSPSParams(spsd, tc);"
],
"header": "@@ -4180,7 +4156,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\taddSPSParams(spsd, tc, wait);"
]
},
{
"added": [
"\t\t\t\t\t\t\t\t\t tc);"
],
"header": "@@ -4220,8 +4196,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\t\t\t\t\t\t tc,",
"\t\t\t\t\t\t\t\t\t wait);"
]
},
{
"added": [
"\t\tti.insertRow(row, tc);"
],
"header": "@@ -5796,7 +5771,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tti.insertRow(row, tc, true);"
]
},
{
"added": [
"\t\t\t\t\t\t\t\ttc);"
],
"header": "@@ -7309,8 +7284,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\t\t\t\t\ttc,",
"\t\t\t\t\t\t\t\ttrue);"
]
},
{
"added": [
"\t\tint insertRetCode = ti.insertRow(row, tc);"
],
"header": "@@ -7949,7 +7923,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tint insertRetCode = ti.insertRow(row, tc, true);"
]
},
{
"added": [
"\t\t\taddSPSDescriptor(spsd, tc);"
],
"header": "@@ -9388,7 +9362,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\taddSPSDescriptor(spsd, tc, true);"
]
},
{
"added": [
" int insertRetCode = ti.insertRow(row, tc);"
],
"header": "@@ -11834,7 +11808,7 @@ public final class\tDataDictionaryImpl",
"removed": [
" int insertRetCode = ti.insertRow(row, tc, true /* wait */);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/TabInfoImpl.java",
"hunks": [
{
"added": [
"\tint insertRow( ExecRow row, TransactionController tc)",
"\t\treturn insertRowListImpl(new ExecRow[] {row},tc,notUsed);"
],
"header": "@@ -413,19 +413,18 @@ class TabInfoImpl",
"removed": [
"\t *\t@param\twait\t\tto wait on lock or quickly TIMEOUT",
"\tint insertRow( ExecRow row, TransactionController tc, boolean wait)",
"\t\treturn insertRowListImpl(new ExecRow[] {row},tc,notUsed, wait);"
]
},
{
"added": [
"\t\treturn insertRowListImpl(rowList,tc,notUsed);"
],
"header": "@@ -446,7 +445,7 @@ class TabInfoImpl",
"removed": [
"\t\treturn insertRowListImpl(rowList,tc,notUsed, true);"
]
},
{
"added": [
"\tprivate int insertRowListImpl(ExecRow[] rowList, TransactionController tc,",
" RowLocation[] rowLocationOut)"
],
"header": "@@ -461,12 +460,11 @@ class TabInfoImpl",
"removed": [
"\t @param wait to wait on lock or quickly TIMEOUT",
"\tprivate int insertRowListImpl(ExecRow[] rowList, TransactionController tc, RowLocation[] rowLocationOut,",
"\t\t\t\t\t\t\t\t boolean wait)"
]
},
{
"added": [
"\t\t\t\tTransactionController.OPENMODE_FORUPDATE,"
],
"header": "@@ -482,8 +480,7 @@ class TabInfoImpl",
"removed": [
"\t\t\t\t(TransactionController.OPENMODE_FORUPDATE |",
" ((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT)),"
]
},
{
"added": [
"\t\t\t\t\t\tTransactionController.OPENMODE_FORUPDATE,"
],
"header": "@@ -504,8 +501,7 @@ class TabInfoImpl",
"removed": [
"\t\t\t\t\t\t(TransactionController.OPENMODE_FORUPDATE |",
" \t\t((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT)),"
]
}
]
},
{
"file": "java/storeless/org/apache/derby/impl/storeless/EmptyDictionary.java",
"hunks": [
{
"added": [
"\t\t\tTransactionController tc) throws StandardException {",
"\t\t\tboolean recompile, boolean updateSYSCOLUMNS,"
],
"header": "@@ -450,13 +450,13 @@ public class EmptyDictionary implements DataDictionary, ModuleSupportable {",
"removed": [
"\t\t\tTransactionController tc, boolean wait) throws StandardException {",
"\t\t\tboolean recompile, boolean updateSYSCOLUMNS, boolean wait,"
]
},
{
"added": [],
"header": "@@ -794,12 +794,6 @@ public class EmptyDictionary implements DataDictionary, ModuleSupportable {",
"removed": [
"\tpublic void addDescriptor(TupleDescriptor tuple, TupleDescriptor parent,",
"\t\t\tint catalogNumber, boolean allowsDuplicates,",
"\t\t\tTransactionController tc, boolean wait) throws StandardException {",
"\t}",
"",
""
]
}
]
}
] |
derby-DERBY-177-8c502aef
|
Reduced derby.locks.waitTimeout for UpdatableResultSetTest because one
of the test cases gets a lock timeout in an internal transaction
(compilation of a trigger statement). The hang is probably caused by
DERBY-177.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@544155 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-177-a9215529
|
DERBY-3850: Remove unneeded workarounds for DERBY-177 and DERBY-3693
Removed the wait parameter from TabInfoImpl.updateRow(). The method
only had two callers, both of which called it with
wait=true. updateRow() passed the parameter on to openForUpdate() in
RowChanger, but that method is sometimes called with wait=false, so
the parameter couldn't be removed from that method.
Also removed an unused variable and some unused imports.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@695244 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/TabInfoImpl.java",
"hunks": [
{
"added": [],
"header": "@@ -22,18 +22,14 @@",
"removed": [
"import org.apache.derby.iapi.services.context.ContextService;",
"import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;",
"import org.apache.derby.iapi.sql.execute.ExecutionContext;",
"import org.apache.derby.iapi.sql.execute.ExecutionFactory;"
]
},
{
"added": [],
"header": "@@ -46,11 +42,8 @@ import org.apache.derby.iapi.store.access.StaticCompiledOpenConglomInfo;",
"removed": [
"import org.apache.derby.iapi.types.DataValueFactory;",
"import org.apache.derby.catalog.UUID;",
"import java.util.Enumeration;"
]
},
{
"added": [
"\t\tupdateRow(key, newRows, indexNumber, indicesToUpdate, colsToUpdate, tc);"
],
"header": "@@ -936,7 +929,7 @@ class TabInfoImpl",
"removed": [
"\t\tupdateRow(key, newRows, indexNumber, indicesToUpdate, colsToUpdate, tc, true);"
]
},
{
"added": [],
"header": "@@ -963,46 +956,11 @@ class TabInfoImpl",
"removed": [
"\t{",
"\t\tupdateRow(key, newRows, indexNumber, indicesToUpdate, colsToUpdate, tc, true);",
"\t}",
"",
"\t/**",
"\t * Updates a set of base rows in a catalog with the same key on an index",
"\t * and updates all the corresponding index rows. If parameter wait is true,",
"\t * then the caller wants to wait for locks. When using a nested user xaction",
"\t * we want to timeout right away if the parent holds the lock.",
"\t *",
"\t *\t@param\tkey\t\t\tkey row",
"\t *\t@param\tnewRows\t\tnew version of the array of rows",
"\t *\t@param\tindexNumber\tindex that key operates",
"\t *\t@param\tindicesToUpdate\tarray of booleans, one for each index on the catalog.",
"\t *\t\t\t\t\t\t\tif a boolean is true, that means we must update the",
"\t *\t\t\t\t\t\t\tcorresponding index because changes in the newRow",
"\t *\t\t\t\t\t\t\taffect it.",
"\t *\t@param colsToUpdate\tarray of ints indicating which columns (1 based)",
"\t *\t\t\t\t\t\t\tto update. If null, do all.",
"\t *\t@param\ttc\t\t\ttransaction controller",
"\t *\t@param wait\t\tIf true, then the caller wants to wait for locks. When",
"\t *\t\t\t\t\t\t\tusing a nested user xaction we want to timeout right away",
"\t *\t\t\t\t\t\t\tif the parent holds the lock. (bug 4821)",
"\t *",
"\t * @exception StandardException\t\tThrown on failure",
"\t */",
"\tprivate void updateRow( ExecIndexRow\t\t\t\tkey,",
"\t\t\t\t\t\t ExecRow[]\t\t\t\tnewRows,",
"\t\t\t\t\t\t int\t\t\t\t\t\tindexNumber,",
"\t\t\t\t\t\t boolean[]\t\t\t\tindicesToUpdate,",
"\t\t\t\t\t\t int[]\t\t\t\t\tcolsToUpdate,",
"\t\t\t\t\t\t TransactionController\ttc,",
"\t\t\t\t\t\t boolean wait)",
"\t\tthrows StandardException",
"\t\tExecIndexRow\t\t\t\ttemplateRow;"
]
},
{
"added": [
"\t\trc.openForUpdate(indicesToUpdate, TransactionController.MODE_RECORD, true);",
" TransactionController.OPENMODE_FORUPDATE,",
"\t\t\tTransactionController.OPENMODE_FORUPDATE,"
],
"header": "@@ -1014,22 +972,20 @@ class TabInfoImpl",
"removed": [
"\t\trc.openForUpdate(indicesToUpdate, TransactionController.MODE_RECORD, wait); ",
" (TransactionController.OPENMODE_FORUPDATE |",
" ((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT)),",
"\t\t\t(TransactionController.OPENMODE_FORUPDATE |",
" ((wait) ? 0 : TransactionController.OPENMODE_LOCK_NOWAIT)), "
]
}
]
}
] |
derby-DERBY-1773-b9e22cc3
|
DERBY-1773: cursor updates fail with syntax error when column has an alias
This change enhances the runtime column analysis code so that an updatable
cursor can make a more nuanced decision about whether a column update is
or is not allowed.
Specifically, certain columns may not be updated, if they have been aliased.
Prior to this change, a confusing syntax error message would be delivered
when attempting to update an aliased column. Now, a more clear error message
is delivered, pointing at the fact that the aliased column is not in the
FOR UPDATE list of the cursor.
So the net result is (at least, should be) that the same set of queries are
accepted, but those that are not accepted have a slightly more clear message
issued when they are detected.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1734744 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumnList.java",
"hunks": [
{
"added": [
" \t/**",
"\t * Return true if some columns in this list are updatable.",
"\t *",
"\t * @return\ttrue if any column in list is updatable, else false",
"\t */",
"\tboolean columnsAreUpdatable()",
"\t{",
"\t\tfor (ResultColumn rc : this)",
"\t\t{",
"\t\t\tif (rc.isUpdatable())",
"\t\t\t\treturn true;",
"\t\t}",
"\t\treturn false;",
"\t}",
"\t\t"
],
"header": "@@ -523,6 +523,21 @@ class ResultColumnList extends QueryTreeNodeVector<ResultColumn>",
"removed": []
}
]
}
] |
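The `columnsAreUpdatable()` method added above lets the cursor decide whether any column in its result list may be updated before producing an error for an aliased column. A simplified standalone analog of that scan, using a plain name-to-flag map instead of Derby's `ResultColumnList`:

```java
import java.util.Map;

// Simplified analog of ResultColumnList.columnsAreUpdatable():
// true if at least one column in the cursor's result list is updatable.
public class CursorColumns {
    public static boolean anyUpdatable(Map<String, Boolean> columns) {
        for (Boolean updatable : columns.values()) {
            if (Boolean.TRUE.equals(updatable)) {
                return true;
            }
        }
        return false;
    }
}
```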
derby-DERBY-1777-7e5c6699
|
DERBY-1777: Commit Army's d1777_v2.patch, cleaning up an NPE in the optimizer.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@446924 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/OptimizerImpl.java",
"hunks": [
{
"added": [
"\t\t\t\t/* If we already assigned at least one position in the",
"\t\t\t\t * join order when this happened (i.e. if joinPosition",
"\t\t\t\t * is greater than *or equal* to zero; DERBY-1777), then ",
"\t\t\t\t * reset the join order before jumping. The call to",
"\t\t\t\t * rewindJoinOrder() here will put joinPosition back",
"\t\t\t\t * to 0. But that said, we'll then end up incrementing",
"\t\t\t\t * joinPosition before we start looking for the next",
"\t\t\t\t * join order (see below), which means we need to set",
"\t\t\t\t * it to -1 here so that it gets incremented to \"0\" and",
"\t\t\t\t * then processing can continue as normal from there. ",
"\t\t\t\t * Note: we don't need to set reloadBestPlan to true",
"\t\t\t\t * here because we only get here if we have *not* found",
"\t\t\t\t * a best plan yet.",
"\t\t\t\t */",
"\t\t\t\tif (joinPosition >= 0)"
],
"header": "@@ -454,18 +454,21 @@ public class OptimizerImpl implements Optimizer",
"removed": [
"\t\t\t\t// If we were in the middle of a join order when this",
"\t\t\t\t// happened, then reset the join order before jumping.",
"\t\t\t\t// The call to rewindJoinOrder() here will put joinPosition",
"\t\t\t\t// back to 0. But that said, we'll then end up incrementing ",
"\t\t\t\t// joinPosition before we start looking for the next join",
"\t\t\t\t// order (see below), which means we need to set it to -1",
"\t\t\t\t// here so that it gets incremented to \"0\" and then",
"\t\t\t\t// processing can continue as normal from there. Note:",
"\t\t\t\t// we don't need to set reloadBestPlan to true here",
"\t\t\t\t// because we only get here if we have *not* found a",
"\t\t\t\t// best plan yet.",
"\t\t\t\tif (joinPosition > 0)"
]
}
]
}
] |
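The DERBY-1777 fix is a boundary-condition change: the join order must be rewound whenever at least one position has been assigned (`joinPosition >= 0`), not only when more than one has (`joinPosition > 0`). A toy sketch contrasting the two tests (the real optimizer calls `rewindJoinOrder()` where the comment indicates):

```java
// Sketch of the DERBY-1777 off-by-one: returns the joinPosition value
// processing continues from after a "jump". fixedTest=true uses the
// corrected ">= 0" check; false reproduces the pre-fix "> 0" check
// that skipped the rewind when exactly one position was assigned.
public class JoinOrderJump {
    public static int afterJump(int joinPosition, boolean fixedTest) {
        boolean rewind = fixedTest ? joinPosition >= 0 : joinPosition > 0;
        if (rewind) {
            // rewindJoinOrder() would run here in the real optimizer
            joinPosition = -1;
        }
        return joinPosition + 1; // normal pre-increment before next search
    }
}
```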
derby-DERBY-1784-b6c6e95c
|
DERBY-1784
contributed by Yip Ng
After studying the
compiler code abit more, I found that DML statements such as INSERT, UPDATE and DELETE also suffer from the same problem (they use different bind logic)
With that said, this patch attempts to address all the stated problems above
when column reference is qualified with a synonym table name.
The fundamental problem is that Derby does not keep the original unbound table
name around once the synonym is resolved. So the fix is to address this case
and apply the qualification properly.
In the VIEW resolution case, the system needs to preserve the synonym name as
VIEW gets expanded to a subquery, the name to be set should be the exposed
name of the table and not the resolved table name.
For * expansion in the SELECT list, if the FROM clause happens to be a synonym,
the system should prepend it with the unbound name and not the resolved table
name. This way the binding logic is normalized.
For DML cases, the synonym name needs to be normalized to its base table so that
setColumnDescriptor can apply correctly. When the system binds the expression
for this result column, it will resolve this properly since the column binding
logic is in the respective FromTable subclass implementations, which will use
the exposed name this time to check for qualification.
I wrote more test cases for synonym.sql, but found out that this SQL file is
actually not part of the derbylang suite, so the patch adds it back to the
test bucket.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@447469 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DMLModStatementNode.java",
"hunks": [
{
"added": [
"\tprotected TableName synonymTableName;",
"\t"
],
"header": "@@ -128,7 +128,8 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": [
""
]
},
{
"added": [
"\t\t\t\tsynonymTableName = targetTableName;"
],
"header": "@@ -229,6 +230,7 @@ abstract class DMLModStatementNode extends DMLStatementNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java",
"hunks": [
{
"added": [
"\t\tsetOrigTableName(this.tableName);"
],
"header": "@@ -230,6 +230,7 @@ public class FromBaseTable extends FromTable",
"removed": []
},
{
"added": [
"\t\t\t\t\t(correlationName != null) ? ",
" correlationName : getOrigTableName().getTableName(), "
],
"header": "@@ -2217,7 +2218,8 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t\t\t\t(correlationName != null) ? correlationName : tableName.getTableName(), "
]
},
{
"added": [
"\t\t\t\tfsq.setOrigTableName(this.getOrigTableName());"
],
"header": "@@ -2230,6 +2232,7 @@ public class FromBaseTable extends FromTable",
"removed": []
},
{
"added": [
"\t\t\t"
],
"header": "@@ -2389,6 +2392,7 @@ public class FromBaseTable extends FromTable",
"removed": []
},
{
"added": [
" exposedTableName = getExposedTableName();"
],
"header": "@@ -2466,14 +2470,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\tif (correlationName != null)",
"\t\t{",
"\t\t\texposedTableName = makeTableName(null, correlationName);",
"\t\t}",
"\t\telse",
"\t\t{",
"\t\t\texposedTableName = tableName;",
"\t\t}"
]
},
{
"added": [
"\t * Get the exposed name for this table, which is the name that can",
"\t * be used to refer to it in the rest of the query.",
"\t *",
"\t * @return\tThe exposed name of this table.",
"\tpublic String getExposedName() ",
"\t\t\treturn getOrigTableName().getFullTableName();",
"\t",
"\t/**",
"\t * Get the exposed table name for this table, which is the name that can",
"\t * be used to refer to it in the rest of the query.",
"\t *",
"\t * @return\tTableName The exposed name of this table.",
"\t *",
"\t * @exception StandardException Thrown on error",
"\t */",
"\tprivate TableName getExposedTableName() throws StandardException ",
"\t{",
"\t\tif (correlationName != null)",
"\t\t\treturn makeTableName(null, correlationName);",
"\t\telse",
"\t\t\treturn getOrigTableName();",
"\t}",
"\t"
],
"header": "@@ -3426,20 +3423,36 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t * Return the exposed name for this table, which is the name that",
"\t * can be used to refer to this table in the rest of the query.",
"\t * @return\tThe exposed name for this table.",
"",
"\tpublic String getExposedName()",
"\t\t\treturn tableName.getFullTableName();",
""
]
},
{
"added": [
"\t\treturn getResultColumnsForList(allTableName, resultColumns, ",
"\t\t\t\tgetOrigTableName());"
],
"header": "@@ -3466,7 +3479,8 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\treturn getResultColumnsForList(allTableName, resultColumns, tableName);"
]
},
{
"added": [
"\t\texposedName = getExposedTableName();"
],
"header": "@@ -3491,14 +3505,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\tif (correlationName == null)",
"\t\t{",
"\t\t\texposedName = tableName;",
"\t\t}",
"\t\telse",
"\t\t{",
"\t\t\texposedName = makeTableName(null, correlationName);",
"\t\t}"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromTable.java",
"hunks": [
{
"added": [
"\t/** the original unbound table name */",
"\tprotected TableName origTableName;",
"\t"
],
"header": "@@ -118,6 +118,9 @@ abstract class FromTable extends ResultSetNode implements Optimizable",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/InsertNode.java",
"hunks": [
{
"added": [
"\t\t\t/*",
"\t\t\t * Normalize synonym qualifers for column references.",
"\t\t\t */",
"\t\t\tif (synonymTableName != null)",
"\t\t\t{",
"\t\t\t\tnormalizeSynonymColumns ( targetColumnList, targetTableName );",
"\t\t\t}",
"\t\t\t"
],
"header": "@@ -264,6 +264,14 @@ public final class InsertNode extends DMLModStatementNode",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/UpdateNode.java",
"hunks": [
{
"added": [
"\t\t\t{",
"\t\t\t\tthis.synonymTableName = targetTableName;",
"\t\t\t\tthis.targetTableName = synonymTab;",
"\t\t\t}"
],
"header": "@@ -208,7 +208,10 @@ public final class UpdateNode extends DMLModStatementNode",
"removed": [
"\t\t\t\tthis.targetTableName = synonymTab;"
]
},
{
"added": [
"\t\t/* Normalize the SET clause's result column list for synonym */",
"\t\tif (synonymTableName != null)",
"\t\t\tnormalizeSynonymColumns( resultSet.resultColumns, targetTable );",
"\t\t"
],
"header": "@@ -342,6 +345,10 @@ public final class UpdateNode extends DMLModStatementNode",
"removed": []
}
]
}
] |
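The core of the DERBY-1784 fix is to keep the original (unbound) synonym name around and rewrite column qualifiers that use it so later binding sees a uniform table name. A rough standalone sketch of that normalization step; the method and its inputs are illustrative, not Derby's actual `normalizeSynonymColumns` signature:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of normalizing synonym-qualified column references: a qualifier
// equal to the synonym name is rewritten to the resolved base-table name,
// so binding logic only ever sees the base name. Unqualified references
// and references to other tables pass through untouched.
public class SynonymNormalizer {
    public static List<String> normalize(
            List<String> qualifiedColumns, // e.g. "syn.col" or "col"
            String synonymName,
            String baseTableName) {
        List<String> out = new ArrayList<>();
        for (String qc : qualifiedColumns) {
            int dot = qc.indexOf('.');
            if (dot > 0 && qc.substring(0, dot).equals(synonymName)) {
                out.add(baseTableName + qc.substring(dot));
            } else {
                out.add(qc);
            }
        }
        return out;
    }
}
```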
derby-DERBY-1785-9082f658
|
DERBY-1785
contributed by Myrna van Lunteren
patch: DERBY-1785_20061007.diff
Attaching a band-aid patch for this issue. I chose to comment out the method
rather than remove it, as a way to document the quirky behavior.
Having the method setSecurityProps overload the one in jvm.java causes
problems when running the junit tests - they *do* successfully run with
securityManager.
Foundation class tests actually run ok with security manager - except when
useprocess is false. This is caused by a bug in the jvm. See also DERBY-885.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@462607 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/functionTests/harness/j9_foundation.java",
"hunks": [
{
"added": [
"// Having the following method overload the one in jvm.java causes problems when running",
"// the junit tests - they *do* successfully run with securityManager.",
"// Foundation class tests actually run ok with security manager - except when useprocess",
"// is false. This is caused by a bug in the jvm. See also DERBY-885 and DERBY-1785.",
"//\tprotected void setSecurityProps()",
"//\t{",
"//\t\tSystem.out.println(\"Note: J9 (foundation) tests do not run with security manager\");\t\t",
"//\t}"
],
"header": "@@ -128,9 +128,13 @@ public class j9_foundation extends jvm {",
"removed": [
"\tprotected void setSecurityProps()",
"\t{",
"\t\tSystem.out.println(\"Note: J9 (foundation) tests do not run with security manager\");\t\t",
"\t}"
]
}
]
}
] |
derby-DERBY-1786-9e2a7491
|
DERBY-1786 (a crash during re-encryption may cause an unrecoverable db)
The problem occurred when the transaction log spanned more than one log file
during (re)encryption of the database and there was a crash just before
switching the database to use the new encryption properties. On recovery, the
checkpoint in the first log file was used as the reference, and the next log
file was assumed to hold the commit log record for (re)encryption and was
incorrectly deleted to force the rollback, which led to an incomplete rollback
of the re-encryption. That in turn caused recovery failures on the next
(re)encryption after the crash.
This patch fixes the problem by ensuring there is a checkpoint record in the
last log file before creating a new log file with the new encryption properties
and writing the commit log record. The log is also flushed before making the
transaction log use the new encryption key, to avoid any part of the old log
records in the buffers getting encrypted with the new key.
While working on this problem, I noticed the error messages thrown in case of
re-encryption failures are confusing, so a new error message was added to
indicate failures specific to (re)encryption.
|
[
{
"file": "java/engine/org/apache/derby/impl/store/raw/RawStore.java",
"hunks": [
{
"added": [
" logFactory.setDatabaseEncrypted(false);"
],
"header": "@@ -302,7 +302,7 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" logFactory.setDatabaseEncrypted();"
]
},
{
"added": [
" private void crashOnDebugFlag(String debugFlag, ",
" boolean reEncrypt) "
],
"header": "@@ -1398,7 +1398,8 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" private void crashOnDebugFlag(String debugFlag) "
]
},
{
"added": [
" StandardException se = StandardException.newException(",
" (reEncrypt ? SQLState.DATABASE_REENCRYPTION_FAILED :",
" SQLState.DATABASE_ENCRYPTION_FAILED),",
" debugFlag);",
" markCorrupt(se);",
" throw se;"
],
"header": "@@ -1407,11 +1408,12 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" StandardException se= StandardException.newException(",
" SQLState.LOG_IO_ERROR, ",
" new IOException(debugFlag));",
" markCorrupt(se);",
" throw se;"
]
},
{
"added": [
"",
" try ",
"\t\t{",
"\t\t\t",
"",
" // all the containers are (re) encrypted, now mark the database as",
" // encrypted if a plain database is getting configured for encryption",
" // or update the encryption the properties, in the ",
" // service.properties ..etc.",
"",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(TEST_REENCRYPT_CRASH_BEFORE_COMMT, reEncrypt);",
" // check if the checkpoint is currently in the last log file, ",
" // otherwise force a checkpoint and then do a log switch, ",
" // after setting up a new encryption key",
" if (!logFactory.isCheckpointInLastLogFile()) ",
" {",
" // perfrom a checkpoint, this is a reference checkpoint ",
" // to find if the re(encryption) is complete. ",
" logFactory.checkpoint(this, dataFactory, xactFactory, true);",
" }",
" encryptDatabase = false;",
" // let the log factory know that database is ",
" // (re) encrypted and ask it to flush the log, ",
" // before enabling encryption of the log with ",
" // the new key.",
" logFactory.setDatabaseEncrypted(true);",
" ",
" // let the log factory and data factory know that ",
" // database is encrypted.",
" if (!reEncrypt) {",
" // mark in the raw store that the database is ",
" // encrypted. ",
" databaseEncrypted = true;",
" dataFactory.setDatabaseEncrypted();",
" } else {",
" // switch the encryption/decryption engine to the new ones.",
" decryptionEngine = newDecryptionEngine; ",
" encryptionEngine = newEncryptionEngine;",
" currentCipherFactory = newCipherFactory;",
" }",
" ",
" // make the log factory ready to encrypt",
" // the transaction log with the new encryption ",
" // key by switching to a new log file. ",
" // If re-encryption is aborted for any reason, ",
" // this new log file will be deleted, during",
" // recovery.",
"",
" logFactory.startNewLogFile();",
"",
" // mark that re-encryption is in progress in the ",
" // service.properties, so that (re) encryption ",
" // changes that can not be undone using the transaction ",
" // log can be un-done before recovery starts.",
" // (like the changes to service.properties and ",
" // any log files the can not be understood by the",
" // old encryption key), incase engine crashes",
" // after this point. ",
"",
" // if the crash occurs before this point, recovery",
" // will rollback the changes using the transaction ",
" // log.",
"",
" properties.put(RawStoreFactory.DB_ENCRYPTION_STATUS,",
" String.valueOf(",
" if (reEncrypt) ",
" {",
" // incase re-encryption, save the old ",
" // encryption related properties, before",
" // doing updates with new values.",
" if (externalKeyEncryption) ",
" {",
" // save the current copy of verify key file.",
" StorageFile verifyKeyFile = ",
" storageFactory.newStorageFile(",
" StorageFile oldVerifyKeyFile = ",
" storageFactory.newStorageFile(",
" if(!privCopyFile(verifyKeyFile, oldVerifyKeyFile))",
" throw StandardException.",
" newException(SQLState.RAWSTORE_ERROR_COPYING_FILE,",
" verifyKeyFile, oldVerifyKeyFile); ",
" // update the verify key file with the new key info.",
" currentCipherFactory.verifyKey(reEncrypt, ",
" storageFactory, ",
" properties);",
" // save the current generated encryption key ",
" String keyString = ",
" properties.getProperty(",
" RawStoreFactory.ENCRYPTED_KEY);",
" if (keyString != null)",
" properties.put(RawStoreFactory.OLD_ENCRYPTED_KEY,",
" keyString);",
" } else ",
" {",
" // save the encryption block size;",
" properties.put(RawStoreFactory.ENCRYPTION_BLOCKSIZE,",
" String.valueOf(encryptionBlockSize));",
" }",
" // save the new encryption properties into service.properties",
" currentCipherFactory.saveProperties(properties) ;",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(",
" TEST_REENCRYPT_CRASH_AFTER_SWITCH_TO_NEWKEY,",
" reEncrypt);",
" }",
" // commit the transaction that is used to ",
" // (re) encrypt the database. Note that ",
" // this will be logged with newly generated ",
" // encryption key in the new log file created ",
" // above.",
" transaction.commit();",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(TEST_REENCRYPT_CRASH_AFTER_COMMT, ",
" reEncrypt);",
" }",
" // force the checkpoint with new encryption key.",
" logFactory.checkpoint(this, dataFactory, xactFactory, true);",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(TEST_REENCRYPT_CRASH_AFTER_CHECKPOINT, ",
" reEncrypt);",
" }",
" // once the checkpont makes it to the log, re-encrption ",
" // is complete. only cleanup is remaining ; update the ",
" // re-encryption status flag to cleanup. ",
" properties.put(RawStoreFactory.DB_ENCRYPTION_STATUS,",
" String.valueOf(",
" // database is (re)encrypted successfuly, ",
" // remove the old version of the container files.",
" dataFactory.removeOldVersionOfContainers(false);",
" if (reEncrypt) ",
" {",
" if (externalKeyEncryption)",
" // remove the saved copy of the verify.key file",
" StorageFile oldVerifyKeyFile = ",
" RawStoreFactory.CRYPTO_OLD_EXTERNAL_KEY_VERIFY_FILE);",
" if (!privDelete(oldVerifyKeyFile))",
" throw StandardException.newException(",
" } else ",
" {",
" // remove the old encryption key property.",
" properties.remove(RawStoreFactory.OLD_ENCRYPTED_KEY);",
" }",
" // (re) encrypion is done, remove the (re) ",
" // encryption status property. ",
" properties.remove(RawStoreFactory.DB_ENCRYPTION_STATUS);",
"",
" // close the transaction. ",
" transaction.close(); ",
" } catch (StandardException se) {",
"",
" throw StandardException.newException(",
" (reEncrypt ? SQLState.DATABASE_REENCRYPTION_FAILED :",
" SQLState.DATABASE_ENCRYPTION_FAILED),",
" se,",
" se.getMessage()); ",
" } finally {",
" // clear the new encryption engines."
],
"header": "@@ -1488,178 +1490,202 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" boolean error = true;",
" try {",
" error = false;",
" }finally {",
" // if (re) encryption failed, abort the transaction.",
" if (error) { ",
" transaction.abort();",
" else {",
" // (re) encryption of all the containers is complete ",
" // update the encryption properties in the ",
" // service.properties ..etc.",
"",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(TEST_REENCRYPT_CRASH_BEFORE_COMMT);",
" }",
" // let the log factory and data factory know that ",
" // database is encrypted.",
" if (!reEncrypt) {",
" // mark in the raw store that the database is ",
" // encrypted. ",
" encryptDatabase = false;",
" databaseEncrypted = true;",
" dataFactory.setDatabaseEncrypted();",
" logFactory.setDatabaseEncrypted();",
"",
" } else {",
" // switch the encryption/decryption engine to the new ones.",
" decryptionEngine = newDecryptionEngine; ",
" encryptionEngine = newEncryptionEngine;",
" currentCipherFactory = newCipherFactory;",
" }",
" ",
" // make the log factory ready to encrypt",
" // the transaction log with the new encryption ",
" // key by switching to a new log file. ",
" // If re-encryption is aborted for any reason, ",
" // this new log file will be deleted, during",
" // recovery.",
"",
" logFactory.startNewLogFile();",
"",
" // mark that re-encryption is in progress in the ",
" // service.properties, so that (re) encryption ",
" // changes that can not be undone using the transaction ",
" // log can be un-done before recovery starts.",
" // (like the changes to service.properties and ",
" // any log files the can not be understood by the",
" // old encryption key), incase engine crashes",
" // after this point. ",
"",
" // if the crash occurs before this point, recovery",
" // will rollback the changes using the transaction ",
" // log.",
" properties.put(RawStoreFactory.DB_ENCRYPTION_STATUS,",
" String.valueOf(",
" if (reEncrypt) ",
" {",
" // incase re-encryption, save the old ",
" // encryption related properties, before",
" // doing updates with new values.",
" if (externalKeyEncryption) ",
" {",
" // save the current copy of verify key file.",
" StorageFile verifyKeyFile = ",
" storageFactory.newStorageFile(",
" StorageFile oldVerifyKeyFile = ",
" storageFactory.newStorageFile(",
" if(!privCopyFile(verifyKeyFile, oldVerifyKeyFile))",
" throw StandardException.",
" newException(SQLState.RAWSTORE_ERROR_COPYING_FILE,",
" verifyKeyFile, oldVerifyKeyFile); ",
" // update the verify key file with the new key info.",
" currentCipherFactory.verifyKey(reEncrypt, ",
" storageFactory, ",
" properties);",
" } else ",
" {",
" // save the current generated encryption key ",
" String keyString = ",
" properties.getProperty(",
" RawStoreFactory.ENCRYPTED_KEY);",
" if (keyString != null)",
" properties.put(RawStoreFactory.OLD_ENCRYPTED_KEY,",
" keyString);",
" }",
" // save the encryption block size;",
" properties.put(RawStoreFactory.ENCRYPTION_BLOCKSIZE,",
" String.valueOf(encryptionBlockSize));",
" // save the new encryption properties into service.properties",
" currentCipherFactory.saveProperties(properties) ;",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(",
" TEST_REENCRYPT_CRASH_AFTER_SWITCH_TO_NEWKEY);",
" }",
" // commit the transaction that is used to ",
" // (re) encrypt the database. Note that ",
" // this will be logged with newly generated ",
" // encryption key in the new log file created ",
" // above.",
" transaction.commit();",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(TEST_REENCRYPT_CRASH_AFTER_COMMT);",
" }",
" // force the checkpoint with new encryption key.",
" logFactory.checkpoint(this, dataFactory, xactFactory, true);",
" if (SanityManager.DEBUG) {",
" crashOnDebugFlag(TEST_REENCRYPT_CRASH_AFTER_CHECKPOINT);",
" }",
" // once the checkpont makes it to the log, re-encrption ",
" // is complete. only cleanup is remaining ; update the ",
" // re-encryption status flag to cleanup. ",
" properties.put(RawStoreFactory.DB_ENCRYPTION_STATUS,",
" String.valueOf(",
" // database is (re)encrypted successfuly, ",
" // remove the old version of the container files.",
" dataFactory.removeOldVersionOfContainers(false);",
" if (reEncrypt) ",
" if (externalKeyEncryption)",
" {",
" // remove the saved copy of the verify.key file",
" StorageFile oldVerifyKeyFile = ",
" RawStoreFactory.CRYPTO_OLD_EXTERNAL_KEY_VERIFY_FILE);",
" if (!privDelete(oldVerifyKeyFile))",
" throw StandardException.newException(",
" } else ",
" {",
" // remove the old encryption key property.",
" properties.remove(RawStoreFactory.OLD_ENCRYPTED_KEY);",
" }",
" // (re) encrypion is done, remove the (re) ",
" // encryption status property. ",
" properties.remove(RawStoreFactory.DB_ENCRYPTION_STATUS);",
" } ",
" transaction.close(); "
]
},
{
"added": [
" TEST_REENCRYPT_CRASH_AFTER_RECOVERY_UNDO_LOGFILE_DELETE, ",
" reEncryption);"
],
"header": "@@ -1741,7 +1767,8 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" TEST_REENCRYPT_CRASH_AFTER_RECOVERY_UNDO_LOGFILE_DELETE);"
]
},
{
"added": [
" TEST_REENCRYPT_CRASH_AFTER_RECOVERY_UNDO_REVERTING_KEY, ",
" reEncryption);"
],
"header": "@@ -1825,7 +1852,8 @@ public final class RawStore implements RawStoreFactory, ModuleControl, ModuleSup",
"removed": [
" TEST_REENCRYPT_CRASH_AFTER_RECOVERY_UNDO_REVERTING_KEY);"
]
}
]
},
{
"file": "java/shared/org/apache/derby/shared/common/reference/SQLState.java",
"hunks": [
{
"added": [
" String DATABASE_ENCRYPTION_FAILED = \"XBCXU.S\";",
" String DATABASE_REENCRYPTION_FAILED = \"XBCXV.S\";"
],
"header": "@@ -229,7 +229,8 @@ public interface SQLState {",
"removed": [
""
]
}
]
}
] |
derby-DERBY-1787-439d1e86
|
DERBY-1787
contributed by Mamta Satoor
patch: DERBY1787_UseCorrectTerminologyV1diff.txt
Grant revoke functionality was added in Derby 10.2 The comments that went into the grant revoke code, in some places refer to database owner as "dba". They are not the same thing. In the grant revoke world, dba is a role. We haven't added roles into Derby yet but current use of dba in comments might make it confusing when we do start working on roles including dba.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@448424 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DD_Version.java",
"hunks": [
{
"added": [
"\t * @param\taid\t AuthorizationID of current user to be made Database Owner"
],
"header": "@@ -303,7 +303,7 @@ public\tclass DD_Version implements\tFormatable",
"removed": [
"\t * @param\taid\t\t\t\t\t\tAuthorizationID of current user to be made DBA"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\tprivate String authorizationDatabaseOwner;"
],
"header": "@@ -327,7 +327,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\tprivate String authorizationDBA;"
]
},
{
"added": [
"\t\t\t\tauthorizationDatabaseOwner = IdUtil.getUserAuthorizationId(userName);"
],
"header": "@@ -656,7 +656,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\tauthorizationDBA = IdUtil.getUserAuthorizationId(userName);"
]
},
{
"added": [
"\t\t\t\tauthorizationDatabaseOwner = sd.getAuthorizationId();"
],
"header": "@@ -693,7 +693,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\tauthorizationDBA = sd.getAuthorizationId();"
]
},
{
"added": [
"\t\t\t\tSanityManager.ASSERT((authorizationDatabaseOwner != null), \"Failed to get Database Owner authorization\");"
],
"header": "@@ -705,7 +705,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\t\t\tSanityManager.ASSERT((authorizationDBA != null), \"Failed to get DBA authorization\");"
]
},
{
"added": [
"\t * Get authorizationID of Database Owner",
"\tpublic String getAuthorizationDatabaseOwner()",
"\t\treturn authorizationDatabaseOwner;"
],
"header": "@@ -1177,13 +1177,13 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t * Get authorizationID of DBA",
"\tpublic String getAuthorizationDBA()",
"\t\treturn authorizationDBA;"
]
},
{
"added": [
"\t * @param aid\t\t\t\t\t\t\tAuthorizationID of Database Owner"
],
"header": "@@ -5570,7 +5570,7 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t * @param aid\t\t\t\t\t\t\tAuthorizationID of DBA"
]
},
{
"added": [
" authorizationDatabaseOwner,"
],
"header": "@@ -6434,7 +6434,7 @@ public final class\tDataDictionaryImpl",
"removed": [
" authorizationDBA,"
]
},
{
"added": [
" authorizationDatabaseOwner,"
],
"header": "@@ -8325,7 +8325,7 @@ public final class\tDataDictionaryImpl",
"removed": [
" authorizationDBA,"
]
},
{
"added": [
" authorizationDatabaseOwner,"
],
"header": "@@ -8334,7 +8334,7 @@ public final class\tDataDictionaryImpl",
"removed": [
" \t\t\t\t\t\tauthorizationDBA,"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/DDLConstantAction.java",
"hunks": [
{
"added": [
"\t\t//If the Database Owner is creating this constraint, then no need to ",
"\t\t//collect any privilege dependencies because the Database Owner can ",
"\t\t//access any objects without any restrictions",
"\t\tif (!(lcc.getAuthorizationId().equals(dd.getAuthorizationDatabaseOwner())))"
],
"header": "@@ -247,10 +247,10 @@ public abstract class DDLConstantAction extends GenericConstantAction",
"removed": [
"\t\t//If a dba is creating this constraint, then no need to collect any ",
"\t\t//privilege dependencies because a dba can access any objects without ",
"\t\t//any restrictions",
"\t\tif (!(lcc.getAuthorizationId().equals(dd.getAuthorizationDBA())))"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/PrivilegeInfo.java",
"hunks": [
{
"added": [
"\t * (table, function, or procedure). Note that Database Owner can access"
],
"header": "@@ -52,7 +52,7 @@ public abstract class PrivilegeInfo",
"removed": [
"\t * (table, function, or procedure). Note that DBA can access"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/TablePrivilegeInfo.java",
"hunks": [
{
"added": [
"\t\tif (user.equals(dd.getAuthorizationDatabaseOwner())) return;"
],
"header": "@@ -135,7 +135,7 @@ public class TablePrivilegeInfo extends PrivilegeInfo",
"removed": [
"\t\tif (user.equals(dd.getAuthorizationDBA())) return;"
]
}
]
}
] |
derby-DERBY-1790-4ea76b17
|
Improve JDBC.dropSchema to include dropping synoyms based upon JDBC metadata. Includes a workaround for DERBY-1790.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@439845 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/JDBC.java",
"hunks": [
{
"added": [],
"header": "@@ -154,7 +154,6 @@ public class JDBC {",
"removed": [
"\t * TODO: Drop Synonyms"
]
},
{
"added": [
"",
" // Synonyms - need work around for DERBY-1790 where",
" // passing a table type of SYNONYM fails.",
" rs = dmd.getTables((String) null, schema, (String) null,",
" new String[] {\"AA_DERBY-1790-SYNONYM\"});",
" ",
" dropUsingDMD(s, rs, schema, \"TABLE_NAME\", \"SYNONYM\");",
" "
],
"header": "@@ -182,7 +181,14 @@ public class JDBC {",
"removed": [
"\t\t"
]
},
{
"added": [
" String objectName = rs.getString(mdColumn);",
"\t\t\ts.addBatch(dropLeadIn + JDBC.escape(schema, objectName));"
],
"header": "@@ -217,8 +223,8 @@ public class JDBC {",
"removed": [
"\t\t\tString view = rs.getString(mdColumn);",
"\t\t\ts.addBatch(dropLeadIn + JDBC.escape(schema, view));"
]
}
]
}
] |
derby-DERBY-1793-197f1c28
|
DERBY-1793
Increasing the maximum time to wait for the server to start up from 30 to 60
seconds. The test checks in 500ms increments, so this change does not increase
the time for the test to run for those who are already successfully running it.
So far this change has made this test pass in my environment, where previously
it failed consistently. Since this test passes in the nightly full test runs
across a number of environments, I assume the network server startup time is
somehow related to my particular machine (processor, memory, disk frag, firewall, vpn, ...)
Others have seen this issue so I am committing to the codeline. I have filed
a separate issue that work should be done to measure the performance of
network server startup as a targeted test (DERBY-1794).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@439041 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1794-197f1c28
|
DERBY-1793
Increasing the maximum time to wait for the server to start up from 30 to 60
seconds. The test checks in 500ms increments, so this change does not increase
the time for the test to run for those who are already successfully running it.
So far this change has made this test pass in my environment, where previously
it failed consistently. Since this test passes in the nightly full test runs
across a number of environments, I assume the network server startup time is
somehow related to my particular machine (processor, memory, disk frag, firewall, vpn, ...)
Others have seen this issue so I am committing to the codeline. I have filed
a separate issue that work should be done to measure the performance of
network server startup as a targeted test (DERBY-1794).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@439041 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1810-59912979
|
DERBY-1810
Bumping the time to wait for server to start in this test. In my environment
this test is failing consistently (and bad error handling in the test then
causes this test to hang forever). Bumping the timeout so far has made it
pass (I tried it 10 times), where before it failed 10 times.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@439702 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1811-0c7cafc6
|
DERBY-1811 Ensure embedded ResultSet.getTimestamp on a TIME column returns a java.sql.Timestamp with a date portion
equal to the current date at the time the getTimestamp method is called.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@448456 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/types/SQLTime.java",
"hunks": [
{
"added": [
" * Convert a SQL TIME to a JDBC java.sql.Timestamp.",
" * ",
" * Behaviour is to set the date portion of the Timestamp",
" * to the actual current date, which may not match the",
" * SQL CURRENT DATE, which remains fixed for the lifetime",
" * of a SQL statement. JDBC drivers (especially network client drivers)",
" * could not be expected to fetch the CURRENT_DATE SQL value",
" * on every query that involved a TIME value, so the current",
" * date as seen by the JDBC client was picked as the logical behaviour.",
" * See DERBY-1811.",
"\tpublic Timestamp getTimestamp( Calendar cal)",
" {",
" // Calendar initialized to current date and time.",
" cal = new GregorianCalendar(); ",
" }",
" else",
" {",
" cal.clear();",
" // Set Calendar to current date and time.",
" cal.setTime(new Date(System.currentTimeMillis()));",
" }",
"",
" ",
" // Derby's resolution for the TIME type is only seconds.",
" "
],
"header": "@@ -139,30 +139,42 @@ public final class SQLTime extends DataType",
"removed": [
"\t\t@exception StandardException thrown on failure",
"\tpublic Timestamp getTimestamp( Calendar cal) throws StandardException",
" cal = new GregorianCalendar();",
"\t\t\t/*",
"\t\t\t** HACK FOR SYMANTEC: in symantec 1.8, the call",
"\t\t\t** to today.getTime().getTime() will blow up ",
"\t\t\t** in GregorianCalendar because year <= 0.",
"\t\t\t** This is a bug in some sort of optimization that",
"\t\t\t** symantic is doing (not related to the JIT). If ",
"\t\t\t** we do a reference to that field everythings works ",
"\t\t\t** fine, hence this extraneous get(Calendar.YEAR).",
"\t\t\t*/",
"\t\t\tcal.get(Calendar.YEAR);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/types/SQLTimestamp.java",
"hunks": [
{
"added": [
" cal.clear();"
],
"header": "@@ -169,6 +169,7 @@ public final class SQLTimestamp extends DataType",
"removed": []
},
{
"added": [
" cal.clear();",
"\t\tcal.set(Calendar.MILLISECOND, (int)(nanos/1000000));"
],
"header": "@@ -197,13 +198,14 @@ public final class SQLTimestamp extends DataType",
"removed": [
"\t\tcal.set(Calendar.MILLISECOND, (int)(nanos/1E06));"
]
},
{
"added": [
" private Timestamp newTimestamp(Calendar currentCal)"
],
"header": "@@ -889,7 +891,7 @@ public final class SQLTimestamp extends DataType",
"removed": [
" protected Timestamp newTimestamp(Calendar currentCal)"
]
}
]
}
] |
derby-DERBY-1816-26b9e3cc
|
DERBY-1816 (partial): Pre-patch "cleanup" that does the following:
1) Replaces each of the recyclable Date, Time, and Timestamp arguments
with a recyclable java.util.Calendar object in client/am/Cursor.java.
2) Modifies the relevant code in client/am/DateTime.java to call methods
on the recyclable Calendar object instead of on Date, Time, and
Timestamp objects. The benefit to doing this is that we are now using
non-deprecated methods.
Note that even with this patch we are still creating a new instance of
Time/Timestamp/Date for each method--the cleanup patch does not change that.
Instead, the cleanup patch adds the instantiation of a new Calendar object
(one per client/am/Cursor) and then (re-)uses that object to replace the
deprecated calls.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@540740 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/Cursor.java",
"hunks": [
{
"added": [
" java.util.Calendar recyclableCalendar_ = null;"
],
"header": "@@ -95,9 +95,7 @@ public abstract class Cursor {",
"removed": [
" java.sql.Date recyclableDate_ = null;",
" java.sql.Time recyclableTime_ = null;",
" java.sql.Timestamp recyclableTimestamp_ = null;"
]
},
{
"added": [
" getRecyclableCalendar(), "
],
"header": "@@ -511,7 +509,7 @@ public abstract class Cursor {",
"removed": [
" recyclableDate_, "
]
},
{
"added": [
" getRecyclableCalendar(),"
],
"header": "@@ -527,7 +525,7 @@ public abstract class Cursor {",
"removed": [
" recyclableTime_,"
]
},
{
"added": [
" return org.apache.derby.client.am.DateTime.timestampBytesToTimestamp(",
" getRecyclableCalendar(), "
],
"header": "@@ -540,10 +538,10 @@ public abstract class Cursor {",
"removed": [
" return org.apache.derby.client.am.DateTime.timestampBytesToTimestamp(",
" recyclableTimestamp_, "
]
},
{
"added": [
" getRecyclableCalendar(), "
],
"header": "@@ -557,7 +555,7 @@ public abstract class Cursor {",
"removed": [
" recyclableTimestamp_, "
]
},
{
"added": [
" getRecyclableCalendar(),"
],
"header": "@@ -571,7 +569,7 @@ public abstract class Cursor {",
"removed": [
" recyclableTimestamp_,"
]
},
{
"added": [
" getRecyclableCalendar(),"
],
"header": "@@ -585,7 +583,7 @@ public abstract class Cursor {",
"removed": [
" recyclableDate_,"
]
},
{
"added": [
" getRecyclableCalendar(),"
],
"header": "@@ -599,7 +597,7 @@ public abstract class Cursor {",
"removed": [
" recyclableTime_,"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/am/DateTime.java",
"hunks": [
{
"added": [
"import java.util.Calendar;"
],
"header": "@@ -24,6 +24,7 @@ import org.apache.derby.shared.common.i18n.MessageUtil;",
"removed": []
},
{
"added": [
" * @param recyclableCal",
" Calendar recyclableCal, "
],
"header": "@@ -56,14 +57,14 @@ public class DateTime {",
"removed": [
" * @param recyclableDate",
" java.sql.Date recyclableDate, "
]
},
{
"added": [
" (((int) date.charAt(yearIndx + 3)) - zeroBase);",
""
],
"header": "@@ -89,8 +90,8 @@ public class DateTime {",
"removed": [
" (((int) date.charAt(yearIndx + 3)) - zeroBase) -",
" 1900;"
]
},
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.set(year, month, day);",
" return new java.sql.Date(cal.getTimeInMillis());"
],
"header": "@@ -99,14 +100,9 @@ public class DateTime {",
"removed": [
" if (recyclableDate == null) {",
" return new java.sql.Date(year, month, day);",
" } else {",
" recyclableDate.setYear(year);",
" recyclableDate.setMonth(month);",
" recyclableDate.setDate(day);",
" return recyclableDate;",
" }"
]
},
{
"added": [
" * @param recyclableCal",
" Calendar recyclableCal,"
],
"header": "@@ -115,14 +111,14 @@ public class DateTime {",
"removed": [
" * @param recyclableTime",
" java.sql.Time recyclableTime,"
]
},
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.set(1970, Calendar.JANUARY, 1, hour, minute, second);",
" return new java.sql.Time(cal.getTimeInMillis());"
],
"header": "@@ -144,14 +140,9 @@ public class DateTime {",
"removed": [
" if (recyclableTime == null) {",
" return new java.sql.Time(hour, minute, second);",
" } else {",
" recyclableTime.setHours(hour);",
" recyclableTime.setMinutes(minute);",
" recyclableTime.setSeconds(second);",
" return recyclableTime;",
" }"
]
},
{
"added": [
" * @param recyclableCal",
" Calendar recyclableCal, "
],
"header": "@@ -160,14 +151,14 @@ public class DateTime {",
"removed": [
" * @param recyclableTimestamp",
" java.sql.Timestamp recyclableTimestamp, "
]
},
{
"added": [
" (((int) timestamp.charAt(3)) - zeroBase);",
""
],
"header": "@@ -181,8 +172,8 @@ public class DateTime {",
"removed": [
" (((int) timestamp.charAt(3)) - zeroBase) -",
" 1900;"
]
},
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.set(year, month, day, hour, minute, second);",
" java.sql.Timestamp ts = new java.sql.Timestamp(cal.getTimeInMillis());",
" ts.setNanos(fraction * 1000);",
" return ts;"
],
"header": "@@ -207,18 +198,11 @@ public class DateTime {",
"removed": [
" if (recyclableTimestamp == null) {",
" return new java.sql.Timestamp(year, month, day, hour, minute, second, fraction * 1000);",
" } else {",
" recyclableTimestamp.setYear(year);",
" recyclableTimestamp.setMonth(month);",
" recyclableTimestamp.setDate(day);",
" recyclableTimestamp.setHours(hour);",
" recyclableTimestamp.setMinutes(minute);",
" recyclableTimestamp.setSeconds(second);",
" recyclableTimestamp.setNanos(fraction * 1000);",
" return recyclableTimestamp;",
" }"
]
},
{
"added": [
" * @param recyclableCal",
" Calendar recyclableCal,"
],
"header": "@@ -389,14 +373,14 @@ public class DateTime {",
"removed": [
" * @param recyclableTimestamp",
" java.sql.Timestamp recyclableTimestamp,"
]
},
{
"added": [
" (((int) date.charAt(yearIndx + 3)) - zeroBase);",
""
],
"header": "@@ -416,8 +400,8 @@ public class DateTime {",
"removed": [
" (((int) date.charAt(yearIndx + 3)) - zeroBase) -",
" 1900;"
]
},
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.set(year, month, day, 0, 0, 0);",
" java.sql.Timestamp ts = new java.sql.Timestamp(cal.getTimeInMillis());",
" ts.setNanos(0);",
" return ts;"
],
"header": "@@ -426,18 +410,11 @@ public class DateTime {",
"removed": [
" if (recyclableTimestamp == null) {",
" return new java.sql.Timestamp(year, month, day, 0, 0, 0, 0);",
" } else {",
" recyclableTimestamp.setYear(year);",
" recyclableTimestamp.setMonth(month);",
" recyclableTimestamp.setDate(day);",
" recyclableTimestamp.setHours(0);",
" recyclableTimestamp.setMinutes(0);",
" recyclableTimestamp.setSeconds(0);",
" recyclableTimestamp.setNanos(0);",
" return recyclableTimestamp;",
" }"
]
},
{
"added": [
" * @param recyclableCal"
],
"header": "@@ -447,7 +424,7 @@ public class DateTime {",
"removed": [
" * @param recyclableTimestamp"
]
},
{
"added": [
" Calendar recyclableCal, "
],
"header": "@@ -455,7 +432,7 @@ public class DateTime {",
"removed": [
" java.sql.Timestamp recyclableTimestamp, "
]
},
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.setTime(new java.util.Date());",
"",
" // Now override the time fields with the values we parsed.",
" cal.set(Calendar.HOUR_OF_DAY, hour);",
" cal.set(Calendar.MINUTE, minute);",
" cal.set(Calendar.SECOND, second);",
"",
" // Derby's resolution for the TIME type is only seconds.",
" cal.set(Calendar.MILLISECOND, 0);",
" return new java.sql.Timestamp(cal.getTimeInMillis());"
],
"header": "@@ -480,18 +457,17 @@ public class DateTime {",
"removed": [
" java.util.Date today = new java.util.Date();",
" if (recyclableTimestamp == null) {",
" recyclableTimestamp = new java.sql.Timestamp(today.getTime());",
" }",
" else {",
" recyclableTimestamp.setTime(today.getTime());",
" }",
" recyclableTimestamp.setHours(hour);",
" recyclableTimestamp.setMinutes(minute);",
" recyclableTimestamp.setSeconds(second);",
" recyclableTimestamp.setNanos(0);",
" return recyclableTimestamp;"
]
},
{
"added": [
" * @param recyclableCal",
" Calendar recyclableCal, "
],
"header": "@@ -501,14 +477,14 @@ public class DateTime {",
"removed": [
" * @param recyclableDate",
" java.sql.Date recyclableDate, "
]
},
{
"added": [
" (((int) timestamp.charAt(3)) - zeroBase);",
""
],
"header": "@@ -522,8 +498,8 @@ public class DateTime {",
"removed": [
" (((int) timestamp.charAt(3)) - zeroBase) -",
" 1900;"
]
},
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.set(year, month, day);",
" return new java.sql.Date(cal.getTimeInMillis());"
],
"header": "@@ -532,14 +508,9 @@ public class DateTime {",
"removed": [
" if (recyclableDate == null) {",
" return new java.sql.Date(year, month, day);",
" } else {",
" recyclableDate.setYear(year);",
" recyclableDate.setMonth(month);",
" recyclableDate.setDate(day);",
" return recyclableDate;",
" }"
]
},
{
"added": [
" * @param recyclableCal",
" Calendar recyclableCal, "
],
"header": "@@ -549,14 +520,14 @@ public class DateTime {",
"removed": [
" * @param recyclableTime",
" java.sql.Time recyclableTime, "
]
},
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.set(1970, Calendar.JANUARY, 1, hour, minute, second);",
" return new java.sql.Time(cal.getTimeInMillis());",
" }",
"",
" /**",
" * Return a clean (i.e. all values cleared out) Calendar object",
" * that can be used for creating Time, Timestamp, and Date objects.",
" * If the received Calendar object is non-null, then just clear",
" * that and return it.",
" *",
" * @param recyclableCal Calendar object to use if non-null.",
" */",
" private static Calendar getCleanCalendar(Calendar recyclableCal)",
" {",
" if (recyclableCal != null)",
" {",
" recyclableCal.clear();",
" return recyclableCal;",
"",
" /* Default GregorianCalendar initializes to current time.",
" * Make sure we clear that out before returning, per the",
" * contract of this method.",
" */",
" Calendar result = new java.util.GregorianCalendar();",
" result.clear();",
" return result;"
],
"header": "@@ -575,14 +546,34 @@ public class DateTime {",
"removed": [
" if (recyclableTime == null) {",
" return new java.sql.Time(hour, minute, second);",
" } else {",
" recyclableTime.setYear(hour);",
" recyclableTime.setMonth(minute);",
" recyclableTime.setDate(second);",
" return recyclableTime;"
]
}
]
}
] |
derby-DERBY-1816-33a27994
|
DERBY-1816: ResultSet.getTime() on a SQL TIMESTAMP should retain millisecond
precision. Patch does the following:
1. Separates the timestamp parse logic in client/am/DateTime.java into a new
method called "parseTimestampString()". The new method takes a timestamp
string and a Calendar object, and sets the fields of the Calendar based on
the fields that are parsed from the timestamp string. The method also
returns the parsed microseconds value since that cannot be set on a
Calendar object (the precision of a Calendar is milliseconds).
2. Modifies timestampBytesToTimestamp(...) to call the new method for
parsing timestamps.
3. Changes the timestampBytesToTime(...) method so that it now parses the
full timestamp (via the new parseTimestampString() method) instead of
just parsing the hours, minutes, and seconds. Then a java.sql.Time
object is created from the Calendar object into which the timestamp
string was parsed. This allows us to preserve the sub-second resolution
that is parsed from the timestamp.
4. Re-enables the relevant test case in lang/TimeHandlingTest.java so that
it now runs in client mode.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@541333 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/DateTime.java",
"hunks": [
{
"added": [
" Calendar cal = getCleanCalendar(recyclableCal);",
"",
" /* java.sql.Timestamp has nanosecond precision, so we have to keep",
" * the parsed microseconds value and use that to set nanos.",
" */",
" int micros = parseTimestampString(timestamp, cal);",
" java.sql.Timestamp ts = new java.sql.Timestamp(cal.getTimeInMillis());",
" ts.setNanos(micros * 1000);",
" return ts;",
" }",
"",
" /**",
" * Parse a String of the form <code>yyyy-mm-dd-hh.mm.ss.ffffff</code>",
" * and store the various fields into the received Calendar object.",
" *",
" * @param timestamp Timestamp value to parse, as a String.",
" * @param cal Calendar into which to store the parsed fields. Should",
" * not be null.",
" *",
" * @return The microseconds field as parsed from the timestamp string.",
" * This cannot be set in the Calendar object but we still want to",
" * preserve the value, in case the caller needs it (for example, to",
" * create a java.sql.Timestamp with microsecond precision).",
" */",
" private static int parseTimestampString(String timestamp,",
" Calendar cal)",
" {",
" cal.set(Calendar.YEAR,",
" (((int) timestamp.charAt(3)) - zeroBase));",
" cal.set(Calendar.MONTH,",
" (((int) timestamp.charAt(6)) - zeroBase) - 1);",
"",
" cal.set(Calendar.DAY_OF_MONTH,",
" (((int) timestamp.charAt(9)) - zeroBase));",
"",
" cal.set(Calendar.HOUR,",
" (((int) timestamp.charAt(12)) - zeroBase));",
"",
" cal.set(Calendar.MINUTE,",
" (((int) timestamp.charAt(15)) - zeroBase));",
"",
" cal.set(Calendar.SECOND,",
" (((int) timestamp.charAt(18)) - zeroBase));",
"",
" int micros = "
],
"header": "@@ -166,31 +166,62 @@ public class DateTime {",
"removed": [
" year =",
" (((int) timestamp.charAt(3)) - zeroBase);",
" month =",
" (((int) timestamp.charAt(6)) - zeroBase) -",
" 1;",
" day =",
" (((int) timestamp.charAt(9)) - zeroBase);",
" hour =",
" (((int) timestamp.charAt(12)) - zeroBase);",
" minute =",
" (((int) timestamp.charAt(15)) - zeroBase);",
" second =",
" (((int) timestamp.charAt(18)) - zeroBase);",
" fraction ="
]
},
{
"added": [
" /* The \"ffffff\" that we parsed is microseconds. In order to",
" * capture that information inside of the MILLISECOND field",
" * we have to divide by 1000.",
" */",
" cal.set(Calendar.MILLISECOND, micros / 1000);",
" return micros;"
],
"header": "@@ -198,11 +229,12 @@ public class DateTime {",
"removed": [
" Calendar cal = getCleanCalendar(recyclableCal);",
" cal.set(year, month, day, hour, minute, second);",
" java.sql.Timestamp ts = new java.sql.Timestamp(cal.getTimeInMillis());",
" ts.setNanos(fraction * 1000);",
" return ts;"
]
}
]
}
] |
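The parsing flow this commit describes — fill a Calendar from the `yyyy-mm-dd-hh.mm.ss.ffffff` string, fold microseconds into the MILLISECOND field, and keep the full microsecond value for `Timestamp.setNanos()` — can be approximated with plain substring parsing. This sketch is not the client's actual char-arithmetic parser; class and method names are illustrative:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class TimestampParseSketch {

    /**
     * Parse "yyyy-mm-dd-hh.mm.ss.ffffff" into the Calendar and return the
     * microseconds field, which a Calendar cannot hold directly.
     */
    static int parseTimestampString(String ts, Calendar cal) {
        cal.set(Calendar.YEAR, Integer.parseInt(ts.substring(0, 4)));
        cal.set(Calendar.MONTH, Integer.parseInt(ts.substring(5, 7)) - 1);
        cal.set(Calendar.DAY_OF_MONTH, Integer.parseInt(ts.substring(8, 10)));
        cal.set(Calendar.HOUR_OF_DAY, Integer.parseInt(ts.substring(11, 13)));
        cal.set(Calendar.MINUTE, Integer.parseInt(ts.substring(14, 16)));
        cal.set(Calendar.SECOND, Integer.parseInt(ts.substring(17, 19)));
        int micros = Integer.parseInt(ts.substring(20, 26));
        // Fold the sub-second part into MILLISECOND so a derived
        // java.sql.Time keeps millisecond precision (the DERBY-1816 fix).
        cal.set(Calendar.MILLISECOND, micros / 1000);
        return micros;
    }

    static java.sql.Timestamp toTimestamp(String ts) {
        Calendar cal = new GregorianCalendar();
        cal.clear();
        int micros = parseTimestampString(ts, cal);
        java.sql.Timestamp t = new java.sql.Timestamp(cal.getTimeInMillis());
        t.setNanos(micros * 1000);   // restore full microsecond precision
        return t;
    }

    /** getTime()-style conversion: the Time now keeps the milliseconds. */
    static java.sql.Time toTime(String ts) {
        Calendar cal = new GregorianCalendar();
        cal.clear();
        parseTimestampString(ts, cal);   // MILLISECOND now holds micros / 1000
        return new java.sql.Time(cal.getTimeInMillis());
    }

    public static void main(String[] args) {
        java.sql.Timestamp t = toTimestamp("2006-09-28-14.30.15.123456");
        System.out.println(t.getNanos()); // 123456000
    }
}
```

The two return paths show the split the patch makes: `Timestamp` gets microseconds back via nanos, while `Time` relies on the MILLISECOND field parsed into the Calendar.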
derby-DERBY-1817-86cae7bf
|
DERBY-1817: Race condition in network server's thread pool
Instead of always putting new sessions in the run queue when there are
free threads, the network server now compares the number of free
threads and the size of the run queue. This is done to prevent the run
queue from growing to a size greater than the number of free
threads. Also, the server now synchronizes on runQueue until the
session has been added to the queue. This is to prevent two threads
from deciding that there are enough free threads and adding the
session to the run queue, when there in fact only were enough free
threads for one of them. With this patch, I am not able to reproduce
DERBY-1757 on platforms where the failure was easily reproduced
before.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@441802 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/ClientThread.java",
"hunks": [
{
"added": [],
"header": "@@ -47,7 +47,6 @@ final class ClientThread extends Thread {",
"removed": [
"\t\t\tSession clientSession = null;"
]
}
]
},
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"\t * Add a session - for use by <code>ClientThread</code>. Put the session",
"\t * into the session table and the run queue. Start a new",
"\t * <code>DRDAConnThread</code> if there are more sessions waiting than",
"\t * there are free threads, and the maximum number of threads is not",
"\t * exceeded.",
"\t *",
"\t * @param connectionNumber number of connection",
"\t * @param clientSocket the socket to read from and write to",
"\t */",
"\tvoid addSession(int connectionNumber, Socket clientSocket)",
"\t\t\tthrows IOException {",
"",
"\t\t// Note that we always re-fetch the tracing configuration because it",
"\t\t// may have changed (there are administrative commands which allow",
"\t\t// dynamic tracing reconfiguration).",
"\t\tSession session = new Session(connectionNumber, clientSocket,",
"\t\t\t\t\t\t\t\t\t getTraceDirectory(), getTraceAll());",
"",
"\t\tsessionTable.put(new Integer(connectionNumber), session);",
"",
"\t\t// Synchronize on runQueue to prevent other threads from updating",
"\t\t// runQueue or freeThreads. Hold the monitor's lock until a thread is",
"\t\t// started or the session is added to the queue. If we release the lock",
"\t\t// earlier, we might start too few threads (DERBY-1817).",
"\t\tsynchronized (runQueue) {",
"\t\t\tDRDAConnThread thread = null;",
"",
"\t\t\t// try to start a new thread if we don't have enough free threads",
"\t\t\t// to service all sessions in the run queue",
"\t\t\tif (freeThreads <= runQueue.size()) {",
"\t\t\t\t// Synchronize on threadsSync to ensure that the value of",
"\t\t\t\t// maxThreads doesn't change until the new thread is added to",
"\t\t\t\t// threadList.",
"\t\t\t\tsynchronized (threadsSync) {",
"\t\t\t\t\t// only start a new thread if we have no maximum number of",
"\t\t\t\t\t// threads or the maximum number of threads is not exceeded",
"\t\t\t\t\tif ((maxThreads == 0) ||",
"\t\t\t\t\t\t\t(threadList.size() < maxThreads)) {",
"\t\t\t\t\t\tthread = new DRDAConnThread(session, this,",
"\t\t\t\t\t\t\t\t\t\t\t\t\tgetTimeSlice(),",
"\t\t\t\t\t\t\t\t\t\t\t\t\tgetLogConnections());",
"\t\t\t\t\t\tthreadList.add(thread);",
"\t\t\t\t\t\tthread.start();",
"\t\t\t\t\t}",
"\t\t\t\t}",
"\t\t\t}",
"",
"\t\t\t// add the session to the run queue if we didn't start a new thread",
"\t\t\tif (thread == null) {",
"\t\t\t\trunQueueAdd(session);",
"\t\t\t}",
"\t\t}"
],
"header": "@@ -3372,14 +3372,58 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t * Add To Session Table - for use by ClientThread, add a new Session to the sessionTable.",
"\t *",
"\t * @param i\tConnection number to register",
"\t * @param s\tSession to add to the sessionTable",
"\t */",
"\tprotected void addToSessionTable(Integer i, Session s)",
"\t{",
"\t\tsessionTable.put(i, s);"
]
}
]
}
] |
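The commit message above describes the fix in words: compare the number of free threads to the run-queue size, and hold the runQueue monitor until the session is either queued or handed to a new thread. A minimal, self-contained thread-pool sketch of that admission logic (this is not Derby's `NetworkServerControlImpl`; all names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class PoolAdmissionSketch {

    private final Queue<Runnable> runQueue = new ArrayDeque<>();
    private int freeThreads;   // workers currently parked on runQueue
    private int threadCount;   // workers started so far
    private final int maxThreads;

    PoolAdmissionSketch(int maxThreads) {
        this.maxThreads = maxThreads;
    }

    /**
     * Admit a session. The choice between "wake a free worker" and "start a
     * new worker" is made while holding the runQueue monitor, so two
     * submitters cannot both claim the same free worker (the DERBY-1817 race).
     */
    void addSession(Runnable session) {
        synchronized (runQueue) {
            if (freeThreads > runQueue.size()) {
                runQueue.add(session);
                runQueue.notify();              // a parked worker will pick it up
            } else if (maxThreads == 0 || threadCount < maxThreads) {
                threadCount++;
                Thread worker = new Thread(() -> serve(session));
                worker.setDaemon(true);         // let the JVM exit in this sketch
                worker.start();
            } else {
                runQueue.add(session);          // saturated: queue without waking
            }
        }
    }

    private void serve(Runnable session) {
        while (true) {
            session.run();
            synchronized (runQueue) {
                freeThreads++;
                while (runQueue.isEmpty()) {
                    try {
                        runQueue.wait();
                    } catch (InterruptedException e) {
                        freeThreads--;
                        threadCount--;
                        return;
                    }
                }
                session = runQueue.poll();
                freeThreads--;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PoolAdmissionSketch pool = new PoolAdmissionSketch(2);
        java.util.concurrent.CountDownLatch done = new java.util.concurrent.CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            final int id = i;
            pool.addSession(() -> { System.out.println("session " + id); done.countDown(); });
        }
        done.await();
    }
}
```

Workers re-check the queue in a `while` loop before waiting, so sessions enqueued in the saturated branch (without a `notify`) are still picked up the next time a worker frees up.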
derby-DERBY-1817-aeb14c38
|
DERBY-1817: Race condition in network server's thread pool
Clean-up of NetworkServerControlImpl:
- moves generation of connection number into addSession()
- adds new method removeThread() which can be used instead of
getThreadList().remove()
- removes methods that are no longer used
- makes methods that are only used by NetworkServerControlImpl private
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@442463 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"\tprivate void runQueueAdd(Session clientSession)"
],
"header": "@@ -1856,7 +1856,7 @@ public final class NetworkServerControlImpl {",
"removed": [
"\tprotected void runQueueAdd(Session clientSession)"
]
},
{
"added": [
"\tprivate int getMaxThreads()"
],
"header": "@@ -3094,7 +3094,7 @@ public final class NetworkServerControlImpl {",
"removed": [
"\tprotected int getMaxThreads()"
]
},
{
"added": [
"\t * <p><code>addSession()</code> should only be called from one thread at a",
"\t * time.",
"\t *",
"\tvoid addSession(Socket clientSocket) throws Exception {",
"",
"\t\tint connectionNumber = ++connNum;",
"",
"\t\tif (getLogConnections()) {",
"\t\t\tconsolePropertyMessage(\"DRDA_ConnNumber.I\",",
"\t\t\t\t\t\t\t\t Integer.toString(connectionNumber));",
"\t\t}"
],
"header": "@@ -3378,11 +3378,19 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t * @param connectionNumber number of connection",
"\tvoid addSession(int connectionNumber, Socket clientSocket)",
"\t\t\tthrows IOException {"
]
},
{
"added": [
"\t * Remove a thread from the thread list. Should be called when a",
"\t * <code>DRDAConnThread</code> has been closed.",
"\t * @param thread the closed thread",
"\tvoid removeThread(DRDAConnThread thread) {",
"\t\tthreadList.remove(thread);"
],
"header": "@@ -3430,41 +3438,13 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t * Get New Conn Num - for use by ClientThread, generate a new connection number for the attempted Session.",
"\t *",
"\t * @return\ta new connection number",
"\t */",
"\tprotected int getNewConnNum()",
"\t{",
"\t\treturn ++connNum;",
"\t}",
"",
"",
"\t/**",
"\t * Get Free Threads - for use by ClientThread, get the number of ",
"\t * free threads in order to determine if",
"\t * a new thread can be run.",
"\t * @return\tthe number of free threads",
"\tprotected int getFreeThreads()",
"\t{",
"\t\tsynchronized(runQueue)",
"\t\t{",
"\t\t\treturn freeThreads;",
"\t\t}",
"\t}",
"",
"\t/**",
"\t * Get Thread List - for use by ClientThread, get the thread list ",
"\t * Vector so that a newly spawned thread",
"\t * can be run and added to the ThreadList from the ClientThread ",
"\t *",
"\t * @return\tthe threadList Vector",
"\t */",
"\tprotected Vector getThreadList()",
"\t{",
"\t\treturn threadList;"
]
}
]
}
] |
derby-DERBY-1817-cea6c946
|
DERBY-1817: Race condition in network server's thread pool
Reduce the amount of code synchronized on runQueue in
NetworkServerControlImpl.addSession().
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@442462 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/drda/org/apache/derby/impl/drda/NetworkServerControlImpl.java",
"hunks": [
{
"added": [
"\t\t// Check whether there are enough free threads to service all the",
"\t\t// threads in the run queue in addition to the newly added session.",
"\t\tboolean enoughThreads;",
"\t\t\tenoughThreads = (runQueue.size() < freeThreads);",
"\t\t}",
"\t\t// No need to hold the synchronization on runQueue any longer than",
"\t\t// this. Since no other threads can make runQueue grow, and no other",
"\t\t// threads will reduce the number of free threads without removing",
"\t\t// sessions from runQueue, (runQueue.size() < freeThreads) cannot go",
"\t\t// from true to false until addSession() returns.",
"",
"\t\tDRDAConnThread thread = null;",
"",
"\t\t// try to start a new thread if we don't have enough free threads",
"\t\tif (!enoughThreads) {",
"\t\t\t// Synchronize on threadsSync to ensure that the value of",
"\t\t\t// maxThreads doesn't change until the new thread is added to",
"\t\t\t// threadList.",
"\t\t\tsynchronized (threadsSync) {",
"\t\t\t\t// only start a new thread if we have no maximum number of",
"\t\t\t\t// threads or the maximum number of threads is not exceeded",
"\t\t\t\tif ((maxThreads == 0) || (threadList.size() < maxThreads)) {",
"\t\t\t\t\tthread = new DRDAConnThread(session, this, getTimeSlice(),",
"\t\t\t\t\t\t\t\t\t\t\t\tgetLogConnections());",
"\t\t\t\t\tthreadList.add(thread);",
"\t\t\t\t\tthread.start();",
"\t\t}",
"\t\t// add the session to the run queue if we didn't start a new thread",
"\t\tif (thread == null) {",
"\t\t\trunQueueAdd(session);"
],
"header": "@@ -3392,37 +3392,40 @@ public final class NetworkServerControlImpl {",
"removed": [
"\t\t// Synchronize on runQueue to prevent other threads from updating",
"\t\t// runQueue or freeThreads. Hold the monitor's lock until a thread is",
"\t\t// started or the session is added to the queue. If we release the lock",
"\t\t// earlier, we might start too few threads (DERBY-1817).",
"\t\t\tDRDAConnThread thread = null;",
"",
"\t\t\t// try to start a new thread if we don't have enough free threads",
"\t\t\t// to service all sessions in the run queue",
"\t\t\tif (freeThreads <= runQueue.size()) {",
"\t\t\t\t// Synchronize on threadsSync to ensure that the value of",
"\t\t\t\t// maxThreads doesn't change until the new thread is added to",
"\t\t\t\t// threadList.",
"\t\t\t\tsynchronized (threadsSync) {",
"\t\t\t\t\t// only start a new thread if we have no maximum number of",
"\t\t\t\t\t// threads or the maximum number of threads is not exceeded",
"\t\t\t\t\tif ((maxThreads == 0) ||",
"\t\t\t\t\t\t\t(threadList.size() < maxThreads)) {",
"\t\t\t\t\t\tthread = new DRDAConnThread(session, this,",
"\t\t\t\t\t\t\t\t\t\t\t\t\tgetTimeSlice(),",
"\t\t\t\t\t\t\t\t\t\t\t\t\tgetLogConnections());",
"\t\t\t\t\t\tthreadList.add(thread);",
"\t\t\t\t\t\tthread.start();",
"\t\t\t\t\t}",
"\t\t\t// add the session to the run queue if we didn't start a new thread",
"\t\t\tif (thread == null) {",
"\t\t\t\trunQueueAdd(session);",
"\t\t\t}"
]
}
]
}
] |
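The point of this follow-up patch is lock-scope minimization: only the read of the invariant needs the runQueue monitor, because (as the patched comment explains) no other thread can make `runQueue.size() < freeThreads` go from true to false while `addSession()` runs. A toy illustration of snapshotting the condition under the lock and acting on it outside (names are illustrative, not Derby's):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class LockScopeSketch {

    private final Queue<Object> runQueue = new ArrayDeque<>();
    private int freeThreads;

    void setFreeThreads(int n) { synchronized (runQueue) { freeThreads = n; } }

    void enqueue(Object session) { synchronized (runQueue) { runQueue.add(session); } }

    /**
     * Snapshot the condition under the lock, then act on it outside.
     * Safe only when no concurrent thread can invalidate a true result
     * before this method returns.
     */
    boolean enoughThreads() {
        boolean enough;
        synchronized (runQueue) {
            enough = runQueue.size() < freeThreads;
        }
        // ...start a new worker or enqueue the session out here...
        return enough;
    }

    public static void main(String[] args) {
        LockScopeSketch s = new LockScopeSketch();
        s.setFreeThreads(1);
        System.out.println(s.enoughThreads()); // true: 0 queued, 1 free
    }
}
```

The pattern only works because of the monotonicity argument in the patch comment; for a condition other threads can flip both ways, the whole decision would have to stay inside the synchronized block, as in the first DERBY-1817 commit.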
derby-DERBY-1824-22043364
|
DERBY-1824: Permission/privilege names in exceptions should be in upper case as keywords, not lower case.
Patch contributed by Jazarine Jamal
Patch file: DERBY1824.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@629024 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/StatementTablePermission.java",
"hunks": [
{
"added": [
"\t\t\treturn \"SELECT\";",
"\t\t\treturn \"UPDATE\";",
"\t\t\treturn \"REFERENCES\";",
"\t\t\treturn \"INSERT\";",
"\t\t\treturn \"DELETE\";",
"\t\t\treturn \"TRIGGER\";"
],
"header": "@@ -199,17 +199,17 @@ public class StatementTablePermission extends StatementPermission",
"removed": [
"\t\t\treturn \"select\";",
"\t\t\treturn \"update\";",
"\t\t\treturn \"references\";",
"\t\t\treturn \"insert\";",
"\t\t\treturn \"delete\";",
"\t\t\treturn \"trigger\";"
]
}
]
}
] |
derby-DERBY-1826-a997e8f1
|
DERBY-1826: Add JUnit utility methods for database/server shutdown
Patch contributed by Deepa Remesh.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@448900 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/TestConfiguration.java",
"hunks": [
{
"added": [
" return getDefaultConnection(\"create=true\");"
],
"header": "@@ -408,7 +408,7 @@ public class TestConfiguration {",
"removed": [
" return openConnection(getDatabaseName());"
]
},
{
"added": [
" return getConnection(databaseName, \"create=true\");",
" }",
" ",
" /**",
" * Get a connection to the default database using the specified connection",
" * attributes.",
" * ",
" * @param connAttrs connection attributes",
" * @return connection to database.",
" * @throws SQLException",
" */",
" public Connection getDefaultConnection(String connAttrs)",
" throws SQLException {",
" return getConnection(getDatabaseName(), connAttrs);",
" }",
" ",
" /**",
" * Get a connection to a database using the specified connection ",
" * attributes.",
" * ",
" * @param databaseName database to connect to",
" * @param connAttrs connection attributes",
" * @return connection to database.",
" * @throws SQLException",
" */",
" public Connection getConnection (String databaseName, String connAttrs) ",
" \tthrows SQLException {",
" getJDBCUrl(databaseName) + \";\" + connAttrs,",
" \tgetDataSourcePropertiesForDatabase(databaseName, connAttrs);",
" Properties attrs = getDataSourcePropertiesForDatabase(databaseName, connAttrs);"
],
"header": "@@ -421,26 +421,53 @@ public class TestConfiguration {",
"removed": [
" getJDBCUrl(databaseName) + \";create=true\",",
" getDataSourcePropertiesForDatabase(databaseName);",
" Properties attrs = getDataSourcePropertiesForDatabase(databaseName);"
]
},
{
"added": [
" * database. If the database does not exist, it will be created."
],
"header": "@@ -564,7 +591,7 @@ public class TestConfiguration {",
"removed": [
" * database."
]
},
{
"added": [
" getCurrent().getDatabaseName(), \"create=true\");",
" * Generate properties which can be set on a <code>DataSource</code> ",
" * in order to connect to a database using the specified connection ",
" * attributes.",
" * ",
" * @param connAttrs connection attributes",
" * @return",
" \t(String databaseName, String connAttrs) "
],
"header": "@@ -572,22 +599,20 @@ public class TestConfiguration {",
"removed": [
" getCurrent().getDatabaseName());",
" * Generate properties which can be set on a",
" * <code>DataSource</code> in order to connect to a given",
" * database.",
" *",
" *",
" * @return a <code>Properties</code> object containing server",
" * name, port number, database name and other attributes needed to",
" * connect to the database",
" (String databaseName) "
]
},
{
"added": [
" attrs.setProperty(\"connectionAttributes\", connAttrs);"
],
"header": "@@ -595,7 +620,7 @@ public class TestConfiguration {",
"removed": [
" attrs.setProperty(\"connectionAttributes\", \"create=true\");"
]
}
]
}
] |
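The TestConfiguration changes above thread connection attributes through both connection styles: appended to the JDBC URL with a `;` separator, and carried as the `connectionAttributes` property on a DataSource. A hedged sketch of that convention (embedded-driver URL form; the `wombat` database name is just an example):

```java
import java.util.Properties;

public class ConnAttrsSketch {

    /** Append connection attributes to a Derby JDBC URL, as getConnection(db, attrs) does. */
    static String jdbcUrl(String databaseName, String connAttrs) {
        String url = "jdbc:derby:" + databaseName;
        return (connAttrs == null) ? url : url + ";" + connAttrs;
    }

    /** Carry the same attributes on a DataSource via the connectionAttributes property. */
    static Properties dataSourceProps(String databaseName, String connAttrs) {
        Properties attrs = new Properties();
        attrs.setProperty("databaseName", databaseName);
        if (connAttrs != null) {
            attrs.setProperty("connectionAttributes", connAttrs);
        }
        return attrs;
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("wombat", "create=true"));
        // jdbc:derby:wombat;create=true
    }
}
```

Parameterizing the attributes string is what lets the same helper serve both `create=true` and the shutdown attributes that DERBY-1826 asks for.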
derby-DERBY-1826-f05a2f83
|
DERBY-1522
contributed by Deepa Remesh, [email protected]
Attaching a patch 'derby1522_v2.diff'. It includes a JUnit test for testing the
switch to SQL standard authorization. It tests the following:
1. grant/revoke is not available if derby.database.sqlAuthorization property
is not set.
2. grant/revoke is available when derby.database.sqlAuthorization is set to true.
3. Once derby.database.sqlAuthorization is set to true, it cannot be set to any other value.
This patch also modifies the DatabasePropertyTestSetup.tearDown method. The tearDown method resets the property values to their old values. It will now ignore exceptions when a property reset is not supported. I am including this small change in the above patch. (I had opened DERBY-1827 for the issue with the tearDown method.) I am using the TestUtil.getConnection method to shut down the database. I have opened DERBY-1826 to add methods to Derby's JUnit classes for shutdown.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@441584 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/DatabasePropertyTestSetup.java",
"hunks": [
{
"added": [
"import org.apache.derbyTesting.functionTests.util.SQLStateConstants;",
""
],
"header": "@@ -29,6 +29,8 @@ import java.util.Properties;",
"removed": []
},
{
"added": [
" \t// that will not be reset by the old set. Ignore any ",
" // invalid property values.",
" try {",
" \tfor (Enumeration e = newValues.propertyNames(); e.hasMoreElements();)",
" \t{",
" \t\tString key = (String) e.nextElement();",
" \t\tif (oldValues.getProperty(key) == null)",
" \t\t{",
" \t\t\tsetDBP.setString(1, key);",
" \t\t\tsetDBP.executeUpdate();",
" \t\t}",
" \t}",
" } catch (SQLException sqle) {",
" \tif(!sqle.getSQLState().equals(SQLStateConstants.PROPERTY_UNSUPPORTED_CHANGE))",
" \t\tthrow sqle;"
],
"header": "@@ -75,15 +77,21 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": [
" \t// that will not be reset by the old set.",
" \tfor (Enumeration e = newValues.propertyNames(); e.hasMoreElements();)",
" \t{",
" \t\tString key = (String) e.nextElement();",
" \t\tif (oldValues.getProperty(key) == null)",
" {",
" setDBP.setString(1, key);",
" setDBP.executeUpdate();",
" }"
]
}
]
}
] |
derby-DERBY-1827-f05a2f83
|
DERBY-1522
contributed by Deepa Remesh, [email protected]
Attaching a patch 'derby1522_v2.diff'. It includes a JUnit test for testing the
switch to SQL standard authorization. It tests the following:
1. grant/revoke is not available if derby.database.sqlAuthorization property
is not set.
2. grant/revoke is available when derby.database.sqlAuthorization is set to true.
3. Once derby.database.sqlAuthorization is set to true, it cannot be set to any other value.
This patch also modifies the DatabasePropertyTestSetup.tearDown method. The tearDown method resets the property values to their old values. It will now ignore exceptions when a property reset is not supported. I am including this small change in the above patch. (I had opened DERBY-1827 for the issue with the tearDown method.) I am using the TestUtil.getConnection method to shut down the database. I have opened DERBY-1826 to add methods to Derby's JUnit classes for shutdown.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@441584 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/junit/DatabasePropertyTestSetup.java",
"hunks": [
{
"added": [
"import org.apache.derbyTesting.functionTests.util.SQLStateConstants;",
""
],
"header": "@@ -29,6 +29,8 @@ import java.util.Properties;",
"removed": []
},
{
"added": [
" \t// that will not be reset by the old set. Ignore any ",
" // invalid property values.",
" try {",
" \tfor (Enumeration e = newValues.propertyNames(); e.hasMoreElements();)",
" \t{",
" \t\tString key = (String) e.nextElement();",
" \t\tif (oldValues.getProperty(key) == null)",
" \t\t{",
" \t\t\tsetDBP.setString(1, key);",
" \t\t\tsetDBP.executeUpdate();",
" \t\t}",
" \t}",
" } catch (SQLException sqle) {",
" \tif(!sqle.getSQLState().equals(SQLStateConstants.PROPERTY_UNSUPPORTED_CHANGE))",
" \t\tthrow sqle;"
],
"header": "@@ -75,15 +77,21 @@ public class DatabasePropertyTestSetup extends BaseJDBCTestSetup {",
"removed": [
" \t// that will not be reset by the old set.",
" \tfor (Enumeration e = newValues.propertyNames(); e.hasMoreElements();)",
" \t{",
" \t\tString key = (String) e.nextElement();",
" \t\tif (oldValues.getProperty(key) == null)",
" {",
" setDBP.setString(1, key);",
" setDBP.executeUpdate();",
" }"
]
}
]
}
] |
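The tearDown change recorded above swallows only the "property change not supported" SQLState and rethrows everything else. A standalone sketch of that filtering pattern — the SQLState value and the setter interface here are illustrative stand-ins for Derby's `SQLStateConstants` and the prepared statement:

```java
import java.sql.SQLException;

public class ResetPropsSketch {

    // Illustrative constant; Derby's actual value lives in SQLStateConstants.
    static final String PROPERTY_UNSUPPORTED_CHANGE = "XCY02";

    interface PropertySetter {
        void unset(String key) throws SQLException;
    }

    /** Reset each new property, ignoring only "change not supported" errors. */
    static void resetProperties(Iterable<String> newKeys, PropertySetter setter)
            throws SQLException {
        try {
            for (String key : newKeys) {
                setter.unset(key);
            }
        } catch (SQLException sqle) {
            if (!PROPERTY_UNSUPPORTED_CHANGE.equals(sqle.getSQLState())) {
                throw sqle;   // anything else is a real failure
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        resetProperties(java.util.List.of("derby.locks.deadlockTimeout"),
                key -> System.out.println("resetting " + key));
    }
}
```

Matching on the SQLState rather than catching SQLException wholesale keeps genuine cleanup failures visible in the test teardown.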
derby-DERBY-183-37a2f5e8
|
DERBY-183: Allow unnamed parameters in CREATE FUNCTION
This patch was contributed by James F. Adams ([email protected])
The patch does the following:
1) Modifies java/engine/org/apache/derby/impl/sql/compile/sqlgrammar.jj
a) Initializes parameterName to "" in procedureParameterDefinition
and functionParameterDefinition
b) Makes parameterName optional in procedureParameterDefinition
and functionParameterDefinition
2) Modifies java/engine/org/apache/derby/impl/sql/compile/CreateAliasNode.java
to ignore function and procedure parameter names equal to "" when
checking for duplicate parameter names.
Tests have been added to lang/functions.sql and lang/procedure.java.
The parameter name is made optional by surrounding its production with [].
This changes the grammar from:
parameterName = identifier(Limits.MAX_IDENTIFIER_LENGTH, true)
typeDescriptor = dataTypeDDL()
to:
[ parameterName = identifier(Limits.MAX_IDENTIFIER_LENGTH, true) ]
typeDescriptor = dataTypeDDL()
This results in a choice conflict because certain tokens satisfy both
identifier() and dataTypeDDL(). An additional token of lookahead resolves
this conflict. This results in:
[ LOOKAHEAD(2) parameterName = identifier(Limits.MAX_IDENTIFIER_LENGTH, true) ]
typeDescriptor = dataTypeDDL()
Expressing this in an alternate form such as:
(
parameterName = identifier(Limits.MAX_IDENTIFIER_LENGTH, true)
typeDescriptor = dataTypeDDL()
) | typeDescriptor = dataTypeDDL()
still results in a choice conflict so I opted for the more compact form.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@463982 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/catalog/types/RoutineAliasInfo.java",
"hunks": [
{
"added": [
" * Describe a routine (procedure or function) alias."
],
"header": "@@ -34,7 +34,7 @@ import java.io.ObjectOutput;",
"removed": [
" * Describe a r (procedure or function) alias."
]
},
{
"added": [
" /**",
" * Name of each parameter. As of DERBY 10.3, parameter names",
" * are optional. If the parameter is unnamed, parameterNames[i]",
" * is a string of length 0",
" */"
],
"header": "@@ -55,6 +55,11 @@ public class RoutineAliasInfo extends MethodAliasInfo",
"removed": []
}
]
}
] |
derby-DERBY-1830-01b5d0b3
|
DERBY-1830
missed new file VTITest.java in last commit.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@449116 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1847-41fbd581
|
DERBY-1909: ALTER TABLE DROP COLUMN needs to update GRANTed privileges
When ALTER TABLE DROP COLUMN is used to drop a column from a table, it needs to update the GRANTed column privileges on that table.
The core of this proposed patch involves refactoring and reusing the
DERBY-1847 method which knows how to rewrite SYSCOLPERMS rows
to update the COLUMNS column. The DERBY-1847 version of that code
only handled the case of adding a bit to the COLUMNS column; this patch
extends that method to support removing a bit from the COLUMNS
column as well, then calls the method from the AlterTable execution logic.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@503550 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t{",
"\t\trewriteSYSCOLPERMSforAlterTable(tableID, tc, null);",
"\t}",
"\t/**",
"\t * Update SYSCOLPERMS due to dropping a column from a table.",
"\t *",
"\t * Since ALTER TABLE .. DROP COLUMN .. has removed a column from the",
"\t * table, we need to shrink COLUMNS by removing the corresponding bit",
"\t * position, and shifting all the subsequent bits \"left\" one position.",
"\t *",
"\t * @param tableID\tThe UUID of the table from which a col has been dropped",
"\t * @param tc\t\tTransactionController for the transaction",
"\t * @param columnDescriptor Information about the dropped column",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tpublic void updateSYSCOLPERMSforDropColumn(UUID tableID, ",
"\t\t\tTransactionController tc, ColumnDescriptor columnDescriptor)",
"\t\tthrows StandardException",
"\t{",
"\t\trewriteSYSCOLPERMSforAlterTable(tableID, tc, columnDescriptor);",
"\t}",
"\t/**",
"\t * Workhorse for ALTER TABLE-driven mods to SYSCOLPERMS",
"\t *",
"\t * This method finds all the SYSCOLPERMS rows for this table. Then it",
"\t * iterates through each row, either adding a new column to the end of",
"\t * the table, or dropping a column from the table, as appropriate. It",
"\t * updates each SYSCOLPERMS row to store the new COLUMNS value.",
"\t *",
"\t * @param tableID\tThe UUID of the table being altered",
"\t * @param tc\t\tTransactionController for the transaction",
"\t * @param columnDescriptor Dropped column info, or null if adding",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tprivate void rewriteSYSCOLPERMSforAlterTable(UUID tableID,",
"\t\t\tTransactionController tc, ColumnDescriptor columnDescriptor)",
"\t\tthrows StandardException"
],
"header": "@@ -2363,6 +2363,45 @@ public final class\tDataDictionaryImpl",
"removed": []
},
{
"added": [
"\t\tin SYSCOLPERMS and adjust the \"COLUMNS\" column in SYSCOLPERMS to ",
"\t\taccomodate the added or dropped column in the tableid*/"
],
"header": "@@ -2395,8 +2434,8 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tin SYSCOLPERMS and expand the \"COLUMNS\" column in SYSCOLPERMS to ",
"\t\taccomodate the newly added column to the tableid*/"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [],
"header": "@@ -673,17 +673,6 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t * Currently, column privileges are not repaired when",
"\t * dropping a column. This is bug DERBY-1909, and for the",
"\t * time being we simply reject DROP COLUMN if it is specified",
"\t * when sqlAuthorization is true (that check occurs in the",
"\t * parser, not here). When DERBY-1909 is fixed:",
"\t * - Update this comment",
"\t * - Remove the check in dropColumnDefinition() in the parser",
"\t * - consolidate all the tests in altertableDropColumn.sql",
"\t * back into altertable.sql and remove the separate",
"\t * altertableDropColumn files",
"\t * "
]
}
]
}
] |
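The SYSCOLPERMS rewrite above treats the "COLUMNS" column as a bitmap over the table's columns: ADD COLUMN appends a cleared bit, while DROP COLUMN removes the dropped column's bit and shifts the later bits left one position. A small `java.util.BitSet` sketch of those two rewrites (not Derby's FormatableBitSet code; names are illustrative):

```java
import java.util.BitSet;

public class ColumnBitmapSketch {

    /** ADD COLUMN: the bitmap grows by one bit, cleared (no privilege granted yet). */
    static BitSet addColumn(BitSet perms, int oldColumnCount) {
        BitSet result = (BitSet) perms.clone();
        result.clear(oldColumnCount);   // new trailing bit, explicitly 0
        return result;
    }

    /** DROP COLUMN pos: remove that bit and shift subsequent bits left one place. */
    static BitSet dropColumn(BitSet perms, int pos, int oldColumnCount) {
        BitSet result = new BitSet(oldColumnCount - 1);
        for (int i = 0; i < pos; i++) {
            if (perms.get(i)) result.set(i);
        }
        for (int i = pos + 1; i < oldColumnCount; i++) {
            if (perms.get(i)) result.set(i - 1);
        }
        return result;
    }

    public static void main(String[] args) {
        BitSet perms = new BitSet();
        perms.set(0);
        perms.set(2);                          // a privilege on columns 0 and 2
        System.out.println(dropColumn(perms, 1, 3)); // {0, 1}
    }
}
```

The shift in `dropColumn` is why the add and drop cases could share one rewrite method in the patch: both walk the old bitmap and emit a new one of the adjusted width.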
derby-DERBY-1847-626d3159
|
DERBY-1847
contributed by Mamta Satoor
patch: DERBY1846_V1_diff_AddColumnAndGrantRevoke.txt
To recap the problem, in SQL Authorization mode, when a new column is added to a table, the rows in SYSCOLPERMS for the table in question were not getting updated to incorporate the new column. This caused ASSERT failure when a non-table owner attempted to select the new column.
Some background information on system table involved: SYSCOLPERMS keeps track of column level privileges on a given table. One of the columns in SYSCOLPERMS is "COLUMNS" and it has a bit map to show which columns have the given permission granted on them. When a new column is added to the user table, the "COLUMNS" need to be expanded by one bit and that bit should be initialized to zero since no privileges have been granted on that column at the ALTER TABLE...ADD COLUMN time.
I have fixed this problem by having AlterTableConstantAction.addNewColumnToTable call the new method in DataDictionary called updateSYSCOLPERMSforAddColumnToUserTable. At this point, we know of only the TableDescriptor's uuid which can help us determine all the rows in SYSCOLPERMS for that given table uuid. I get ColPermsDescriptor for each one of those rows and then use the ColPermsDescriptor's uuid to update the "COLUMNS" column so SYSCOLPERMS is aware of the newly added column in user table. This fixes the problem because at the time of SELECT, when we do privilege lookup in SYSCOLPERMS, we have info on the newly added column.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@453352 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t/**",
"\t * Need to update SYSCOLPERMS for a given table because a new column has ",
"\t * been added to that table. SYSCOLPERMS has a column called \"COLUMNS\"",
"\t * which is a bit map for all the columns in a given user table. Since",
"\t * ALTER TABLE .. ADD COLUMN .. has added one more column, we need to",
"\t * expand \"COLUMNS\" for that new column",
"\t *",
"\t * Currently, this code gets called during execution phase of",
"\t * ALTER TABLE .. ADD COLUMN .. ",
"\t *",
"\t * @param tableID\tThe UUID of the table to which a column has been added",
"\t * @param tc\t\tTransactionController for the transaction",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tpublic void\tupdateSYSCOLPERMSforAddColumnToUserTable(UUID tableID, TransactionController tc)",
"\tthrows StandardException",
"\t{",
"\t\t// In Derby authorization mode, permission catalogs may not be present",
"\t\tif (!usesSqlAuthorization)",
"\t\t\treturn;",
"",
"\t\t/* This method has 2 steps to it. First get all the ColPermsDescriptor ",
"\t\tfor given tableid. And next step is to go back to SYSCOLPERMS to find",
"\t\tunique row corresponding to each of ColPermsDescriptor and update the",
"\t\t\"COLUMNS\" column in SYSCOLPERMS. The reason for this 2 step process is",
"\t\tthat SYSCOLPERMS has a non-unique row on \"TABLEID\" column and hence ",
"\t\twe can't get a unique handle on each of the affected row in SYSCOLPERMS",
"\t\tusing just the \"TABLEID\" column */",
"",
"\t\t// First get all the ColPermsDescriptor for the given tableid from ",
"\t\t//SYSCOLPERMS using getDescriptorViaIndex(). ",
"\t\tList permissionDescriptorsList;//all ColPermsDescriptor for given tableid",
"\t\tDataValueDescriptor\t\ttableIDOrderable = getValueAsDVD(tableID);",
"\t\tTabInfoImpl\tti = getNonCoreTI(SYSCOLPERMS_CATALOG_NUM);",
"\t\tSYSCOLPERMSRowFactory rf = (SYSCOLPERMSRowFactory) ti.getCatalogRowFactory();",
"\t\tExecIndexRow keyRow = exFactory.getIndexableRow(1);",
"\t\tkeyRow.setColumn(1, tableIDOrderable);",
"\t\tpermissionDescriptorsList = newSList();",
"\t\tgetDescriptorViaIndex(",
"\t\t\tSYSCOLPERMSRowFactory.TABLEID_INDEX_NUM,",
"\t\t\tkeyRow,",
"\t\t\t(ScanQualifier [][]) null,",
"\t\t\tti,",
"\t\t\t(TupleDescriptor) null,",
"\t\t\tpermissionDescriptorsList,",
"\t\t\tfalse);",
"",
"\t\t/* Next, using each of the ColPermDescriptor's uuid, get the unique row ",
"\t\tin SYSCOLPERMS and expand the \"COLUMNS\" column in SYSCOLPERMS to ",
"\t\taccomodate the newly added column to the tableid*/",
"\t\tColPermsDescriptor colPermsDescriptor;",
"\t\tExecRow curRow;",
"\t\tExecIndexRow uuidKey;",
"\t\t// Not updating any indexes on SYSCOLPERMS",
"\t\tboolean[] bArray = new boolean[SYSCOLPERMSRowFactory.TOTAL_NUM_OF_INDEXES];",
"\t\tint[] colsToUpdate = {SYSCOLPERMSRowFactory.COLUMNS_COL_NUM};",
"\t\tfor (Iterator iterator = permissionDescriptorsList.iterator(); iterator.hasNext(); )",
"\t\t{",
"\t\t\tcolPermsDescriptor = (ColPermsDescriptor) iterator.next();",
"\t\t\tremovePermEntryInCache(colPermsDescriptor);",
"\t\t\tuuidKey = rf.buildIndexKeyRow(rf.COLPERMSID_INDEX_NUM, colPermsDescriptor);",
"\t\t\tcurRow=ti.getRow(tc, uuidKey, rf.COLPERMSID_INDEX_NUM);",
"\t FormatableBitSet columns = (FormatableBitSet) curRow.getColumn( ",
"\t\t\t\t\t SYSCOLPERMSRowFactory.COLUMNS_COL_NUM).getObject();",
"\t int currentLength = columns.getLength();",
"\t columns.grow(currentLength+1);",
"\t curRow.setColumn(SYSCOLPERMSRowFactory.COLUMNS_COL_NUM,",
"\t\t\t\t\t dvf.getDataValue((Object) columns));",
"\t\t\tti.updateRow(keyRow, curRow,",
"\t\t\t\t\tSYSCOLPERMSRowFactory.TABLEID_INDEX_NUM,",
"\t\t\t\t\t bArray, ",
"\t\t\t\t\t colsToUpdate,",
"\t\t\t\t\t tc);",
"\t\t}",
"\t}",
"",
"\t"
],
"header": "@@ -2439,6 +2439,84 @@ public final class\tDataDictionaryImpl",
"removed": []
},
{
"added": [],
"header": "@@ -2528,7 +2606,6 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tExecIndexRow newKey;"
]
},
{
"added": [],
"header": "@@ -2560,7 +2637,6 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tExecIndexRow newKey;"
]
}
]
}
] |
derby-DERBY-1852-0f0f8ade
|
DERBY-1852: Fix "modification of access paths" code in TableOperatorNode
so that the final query tree accurately reflects (and generates) the
necessary modified nodes. Patch also adds corresponding test cases
to lang/union.sql and updates master files accordingly.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@524940 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/SetOperatorNode.java",
"hunks": [
{
"added": [
"\t\t * have to generate a ProjectRestrictNode. Note: we want to check",
"\t\t * all SetOpNodes that exist in the subtree rooted at this SetOpNode.",
"\t\t * Since we just modified access paths on this node, it's possible",
"\t\t * that the SetOperatorNode chain (if there was one) is now \"broken\"",
"\t\t * as a result of the insertion of new nodes. For example, prior",
"\t\t * to modification of access paths we may have a chain such as:",
"\t\t *",
"\t\t * UnionNode (0)",
"\t\t * / \\",
"\t\t * UnionNode (1) SelectNode (2)",
"\t\t * / \\ ",
"\t\t * SelectNode (3) SelectNode (4)",
"\t\t *",
"\t\t * Now if UnionNode(1) did not specify \"ALL\" then as part of the",
"\t\t * above call to modifyAccessPaths() we will have inserted a",
"\t\t * DistinctNode above it, thus giving:",
"\t\t *",
"\t\t * UnionNode (0)",
"\t\t * / \\",
"\t\t * DistinctNode (5) SelectNode (2)",
"\t\t * |",
"\t\t * UnionNode (1)",
"\t\t * / \\ ",
"\t\t * SelectNode (3) SelectNode (4)",
"\t\t *",
"\t\t * So our chain of UnionNode's has now been \"broken\" by an intervening",
"\t\t * DistinctNode. For this reason we can't just walk the chain of",
"\t\t * SetOperatorNodes looking for unpushed predicates (because the",
"\t\t * chain might be broken and then we could miss some nodes). Instead,",
"\t\t * we have to get a collection of all relevant nodes that exist beneath",
"\t\t * this SetOpNode and call hasUnPushedPredicates() on each one. For",
"\t\t * now we only consider UnionNodes to be \"relevant\" because those are",
"\t\t * the only ones that might actually have unpushed predicates.",
"\t\t * ",
"\t\t * If we find any UnionNodes that *do* have unpushed predicates then",
"\t\t * we have to use a PRN to enforce the predicate at the level of",
"\t\t * this, the top-most, SetOperatorNode.",
"",
"\t\t// Find all UnionNodes in the subtree.",
"\t\tCollectNodesVisitor cnv = new CollectNodesVisitor(UnionNode.class);",
"\t\tthis.accept(cnv);",
"\t\tjava.util.Vector unions = cnv.getList();",
"",
"\t\t// Now see if any of them have unpushed predicates.",
"\t\tboolean genPRN = false;",
"\t\tfor (int i = unions.size() - 1; i >= 0; i--)",
"\t\t{",
"\t\t\tif (((UnionNode)unions.get(i)).hasUnPushedPredicates())",
"\t\t\t{",
"\t\t\t\tgenPRN = true;",
"\t\t\t\tbreak;",
"\t\t\t}",
"\t\t}",
"",
"\t\tif (genPRN)"
],
"header": "@@ -171,13 +171,62 @@ abstract class SetOperatorNode extends TableOperatorNode",
"removed": [
"\t\t * have to generate a ProjectRestrictNode. Note: we walk the",
"\t\t * entire chain of UnionNodes (if there is a chain) and see if",
"\t\t * any UnionNode at any level has un-pushed predicates; if so, then",
"\t\t * we use a PRN to enforce the predicate at this, the top-most",
"\t\t * UnionNode.",
"\t\tif (hasUnPushedPredicates())"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/TableOperatorNode.java",
"hunks": [
{
"added": [
"\t\t\t{",
"\t\t\t\t/* We know leftOptimizer's list of Optimizables consists of",
"\t\t\t\t * exactly one Optimizable, and we know that the Optimizable",
"\t\t\t\t * is actually leftResultSet (see optimizeSource() of this",
"\t\t\t\t * class). That said, the following call to modifyAccessPaths()",
"\t\t\t\t * will effectively replace leftResultSet as it exists in",
"\t\t\t\t * leftOptimizer's list with a \"modified\" node that *may* be",
"\t\t\t\t * different from the original leftResultSet--for example, it",
"\t\t\t\t * could be a new DISTINCT node whose child is the original",
"\t\t\t\t * leftResultSet. So after we've modified the node's access",
"\t\t\t\t * path(s) we have to explicitly set this.leftResulSet to",
"\t\t\t\t * point to the modified node. Otherwise leftResultSet would",
"\t\t\t\t * continue to point to the node as it existed *before* it was",
"\t\t\t\t * modified, and that could lead to incorrect behavior for",
"\t\t\t\t * certain queries. DERBY-1852.",
"\t\t\t\t */",
"\t\t\t\tleftResultSet = (ResultSetNode)",
"\t\t\t\t\t((OptimizerImpl)leftOptimizer)",
"\t\t\t\t\t\t.optimizableList.getOptimizable(0);",
"\t\t\t}"
],
"header": "@@ -98,7 +98,27 @@ abstract class TableOperatorNode extends FromTable",
"removed": []
},
{
"added": [
"\t\t\t{",
"\t\t\t\t/* For the same reasons outlined above we need to make sure",
"\t\t\t\t * we set rightResultSet to point to the *modified* right result",
"\t\t\t\t * set node, which sits at position \"0\" in rightOptimizer's",
"\t\t\t\t * list.",
"\t\t\t\t */",
"\t\t\t\trightResultSet = (ResultSetNode)",
"\t\t\t\t\t((OptimizerImpl)rightOptimizer)",
"\t\t\t\t\t\t.optimizableList.getOptimizable(0);",
"\t\t\t}"
],
"header": "@@ -115,7 +135,17 @@ abstract class TableOperatorNode extends FromTable",
"removed": []
},
{
"added": [
"\t\t\t{",
"\t\t\t\t/* We know leftOptimizer's list of Optimizables consists of",
"\t\t\t\t * exactly one Optimizable, and we know that the Optimizable",
"\t\t\t\t * is actually leftResultSet (see optimizeSource() of this",
"\t\t\t\t * class). That said, the following call to modifyAccessPaths()",
"\t\t\t\t * will effectively replace leftResultSet as it exists in",
"\t\t\t\t * leftOptimizer's list with a \"modified\" node that *may* be",
"\t\t\t\t * different from the original leftResultSet--for example, it",
"\t\t\t\t * could be a new DISTINCT node whose child is the original",
"\t\t\t\t * leftResultSet. So after we've modified the node's access",
"\t\t\t\t * path(s) we have to explicitly set this.leftResulSet to",
"\t\t\t\t * point to the modified node. Otherwise leftResultSet would",
"\t\t\t\t * continue to point to the node as it existed *before* it was",
"\t\t\t\t * modified, and that could lead to incorrect behavior for",
"\t\t\t\t * certain queries. DERBY-1852.",
"\t\t\t\t */",
"\t\t\t\tleftResultSet = (ResultSetNode)",
"\t\t\t\t\t((OptimizerImpl)leftOptimizer)",
"\t\t\t\t\t\t.optimizableList.getOptimizable(0);",
"\t\t\t}"
],
"header": "@@ -693,7 +723,27 @@ abstract class TableOperatorNode extends FromTable",
"removed": []
}
]
}
] |
derby-DERBY-1856-47dd4379
|
DERBY-1878
contributed by Sunitha Kambhampati
patch: derby1878.diff.txt
I am attaching the patch (derby1878.diff.txt) to improve some error handling in some of the network server tests.
1. The execCmdDumpResults method used by the five tests timeslice.java, maxthreads.java, testij.java, runtimeinfo.java, sysinfo.java suffers from the same problems that were fixed for testProperties.java, namely
-- the output stream for the sub-process is not flushed out
-- there is no timeout handling for the ProcessStreamResult
2. Eliminate duplication of code in these 5 tests for the execCmdDumpResults(String[] args) method. The execCmdDumpResults method basically execs a new process and then waits for the process to dump the results. A new utility class, ExecProcUtil, is added with an execCmdDumpResults that is generalized to take the necessary input from the tests and fixes the issues mentioned in #1. The OutputStream is flushed out by calling bos.flush and System.out.flush, and a timeout is added for ProcessStreamResult.
3. Make use of TimedProcess to kill the process if it does not exit within the timeout period.
4. The TestConnection.java test has a variation of execCmdDumpResults that also adds some test cases into the method. Hence this method in this test is left as is, and timeout handling is added.
5. testij.out has been updated. The previous master file was dropping the last line written to System.out, but now that the process's streams are flushed properly, the last line of the testij.out test, which prints 'End test',
is also seen in the output file.
Also noticed that these tests - like timeslice.java and maxthreads.java - all seem to set the properties for the server and then check whether the property is set. The functionality of the server when these properties are set is not being tested. It would be good to add tests that verify the functionality itself works as expected. Please note, recently connecting to the server with timeslice options uncovered some issues with the server (see DERBY-1856).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@450508 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/ExecProcUtil.java",
"hunks": [
{
"added": [
"/*",
" ",
" Derby - Class org.apache.derbyTesting.functionTests.util.ExecProcUtil",
" ",
" Licensed to the Apache Software Foundation (ASF) under one or more",
" contributor license agreements. See the NOTICE file distributed with",
" this work for additional information regarding copyright ownership.",
" The ASF licenses this file to You under the Apache License, Version 2.0",
" (the \"License\"); you may not use this file except in compliance with",
" the License. You may obtain a copy of the License at",
" ",
" http://www.apache.org/licenses/LICENSE-2.0",
" ",
" Unless required by applicable law or agreed to in writing, software",
" distributed under the License is distributed on an \"AS IS\" BASIS,",
" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
" See the License for the specific language governing permissions and",
" limitations under the License.",
" ",
" */",
"package org.apache.derbyTesting.functionTests.util;",
"",
"import org.apache.derbyTesting.functionTests.harness.ProcessStreamResult;",
"import org.apache.derbyTesting.functionTests.harness.TimedProcess;",
"import java.util.Vector;",
"import java.io.BufferedOutputStream;",
"/**",
" * Utility class to hold helper methods to exec new processes",
" */",
"public class ExecProcUtil {",
" ",
" /**",
" * For each new exec process done, set ",
" * timeout for ProcessStreamResult after which the thread that ",
" * handles the streams for the process exits. Timeout is in minutes. ",
" * Note: timeout handling will only come into effect when ",
" * ProcessStreamResult#Wait() is called",
" */",
" private static String timeoutMinutes = \"2\";",
" ",
" /**",
" * timeout in seconds for the processes spawned.",
" */",
" private static int timeoutSecondsForProcess = 180;",
" ",
" /**",
" * Execute the given command and dump the results to standard out",
" *",
" * @param args command and arguments",
" * @param vCmd java command line arguments.",
" * @param bos buffered stream (System.out) to dump results to.",
" * @exception Exception",
" */",
" public static void execCmdDumpResults(String[] args, Vector vCmd,",
" BufferedOutputStream bos) throws Exception {",
" // We need the process inputstream and errorstream",
" ProcessStreamResult prout = null;",
" ProcessStreamResult prerr = null;",
"",
" StringBuffer sb = new StringBuffer();",
"",
" for (int i = 0; i < args.length; i++) {",
" sb.append(args[i] + \" \");",
" }",
" System.out.println(sb.toString());",
" int totalSize = vCmd.size() + args.length;",
" String serverCmd[] = new String[totalSize];",
"",
" int i = 0;",
" for (i = 0; i < vCmd.size(); i++)",
" serverCmd[i] = (String) vCmd.elementAt(i);",
"",
" for (int j = 0; i < totalSize; i++)",
" serverCmd[i] = args[j++];",
"",
" System.out.flush();",
" bos.flush();",
"",
" // Start a process to run the command",
" Process pr = Runtime.getRuntime().exec(serverCmd);",
"",
" // TimedProcess, kill process if process doesnt finish in a certain ",
" // amount of time",
" TimedProcess tp = new TimedProcess(pr);",
" prout = new ProcessStreamResult(pr.getInputStream(), bos,",
" timeoutMinutes);",
" prerr = new ProcessStreamResult(pr.getErrorStream(), bos,",
" timeoutMinutes);",
"",
" // wait until all the results have been processed",
" boolean outTimedOut = prout.Wait();",
" boolean errTimedOut = prerr.Wait();",
" ",
" // wait for this process to terminate, upto a wait period",
" // of 'timeoutSecondsForProcess'",
" // if process has already been terminated, this call will ",
" // return immediately.",
" tp.waitFor(timeoutSecondsForProcess);",
" pr = null;",
" ",
" if (outTimedOut || errTimedOut)",
" System.out.println(\" Reading from process streams timed out.. \");",
"",
" System.out.flush();",
" }",
" ",
"}"
],
"header": "@@ -0,0 +1,107 @@",
"removed": []
}
]
}
] |
derby-DERBY-1858-28c633d8
|
DERBY-1858
contributed by Yip Ng
patch: derby1858-trunk-diff02.txt
Fixes a problem where DropSchemaNode's bind phase did not add the required schema
privilege for it to check at runtime.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@449869 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/StatementSchemaPermission.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.services.sanity.SanityManager;"
],
"header": "@@ -27,6 +27,7 @@ import org.apache.derby.iapi.reference.SQLState;",
"removed": []
},
{
"added": [
"\t/**",
"\t * The schema name ",
"\t */",
"\t/**",
"\t * Authorization id",
"\t */",
"\tprivate String aid; ",
"\t/**\t ",
"\t * One of Authorizer.CREATE_SCHEMA_PRIV, MODIFY_SCHEMA_PRIV, ",
"\t * DROP_SCHEMA_PRIV, etc.",
"\t */ ",
"\tprivate int privType; ",
"\tpublic StatementSchemaPermission(String schemaName, String aid, int privType)"
],
"header": "@@ -34,11 +35,21 @@ import org.apache.derby.iapi.store.access.TransactionController;",
"removed": [
"\tprivate String aid;",
"\tprivate boolean privType;",
"\tpublic StatementSchemaPermission(String schemaName, String aid, boolean privType)"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/DropSchemaNode.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.sql.compile.CompilerContext;",
"import org.apache.derby.iapi.sql.conn.Authorizer;"
],
"header": "@@ -21,6 +21,8 @@",
"removed": []
}
]
}
] |
derby-DERBY-1861-2bb13aca
|
DERBY-1861: ASSERT when combining references and expressions in ORDER BY
An ORDER BY clause wihch combines both column references and expressions
causes the sort engine to throw an ASSERT failure in sane builds.
The data structure problems that are exposed by DERBY-1861 both have to do
with the duplicate elimination processing. When the duplicate pulled-up
columns are eliminated from the result column list, the OrderByColumn and
ResultColumn instances may both end up with incorrect values.
The OrderByColumn class contains a field named addedColumnOffset. This
field records the offset of this particular OrderByColumn within the
portion of the result column list which contains pulled-up columns.
Each time a column is pulled up into the result column list, its
addedColumnOffset is set; thus the first pulled-up column has
addedColumnOffset = 0, the second pulled-up column has
addedColumnOffset = 1, etc.
However, later, when duplicate pulled-up result columns are detected
and removed by bind processing, the addedColumnOffset field is not
re-adjusted, and ends up with an invalid value.
The ResultColumn class contains a field named virtualColumnId. For columns
which are not directly from the underlying table, but rather are the result
of expressions that are computed at runtime, the columns are assigned a
virtualColumnId. For reasons similar to those of the addedColumnOffset,
this field also ends up with an invalid value when the duplicate
pulled-up columns are detected and removed from the result column list.
I decided that the best thing was to arrange to call each of the
OrderByColumn instances and ResultColumn instances at the point that
the duplicate result column is detected and removed, to give each of
those objects a chance to adjust its addedColumnOffset and
virtualColumnId value to reflect the removed column. Although this change
required a number of small changes, none of them was terribly complicated,
and the effect of the fix is that the data structures are as desired.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@520038 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/OrderByColumn.java",
"hunks": [
{
"added": [
"\tprivate OrderByList list;"
],
"header": "@@ -46,6 +46,7 @@ public class OrderByColumn extends OrderedColumn {",
"removed": []
},
{
"added": [
"\t * During binding, we may discover that this order by column was pulled",
"\t * up into the result column list, but is now a duplicate, because the",
"\t * actual result column was expanded into the result column list when \"*\"",
"\t * expressions were replaced with the list of the table's columns. In such",
"\t * a situation, we will end up calling back to the OrderByList to",
"\t * adjust the addedColumnOffset values of the columns; the \"oblist\"",
"\t * parameter exists to allow that callback to be performed.",
"\t *",
"\t * @param oblist OrderByList which contains this column",
"\tpublic void bindOrderByColumn(ResultSetNode target, OrderByList oblist)",
"\t\tthis.list = oblist;",
""
],
"header": "@@ -140,14 +141,25 @@ public class OrderByColumn extends OrderedColumn {",
"removed": [
"\tpublic void bindOrderByColumn(ResultSetNode target)"
]
},
{
"added": [
"\t\t\tresultCol = targetCols.findResultColumnForOrderBy(",
" cr.getColumnName(), cr.getTableNameNode());"
],
"header": "@@ -201,8 +213,8 @@ public class OrderByColumn extends OrderedColumn {",
"removed": [
"\t\t\tresultCol = targetCols.getOrderByColumn(cr.getColumnName(),",
" cr.getTableNameNode());"
]
},
{
"added": [
"\t\tresultCol = targetCols.getOrderByColumnToBind(cr.getColumnName(),",
"\t\t\t\t\t\t\tsourceTableNumber,",
"\t\t\t\t\t\t\tthis);"
],
"header": "@@ -333,9 +345,10 @@ public class OrderByColumn extends OrderedColumn {",
"removed": [
"\t\tresultCol = targetCols.getOrderByColumn(cr.getColumnName(),",
"\t\t\t\t\t\t\tsourceTableNumber);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/OrderByList.java",
"hunks": [
{
"added": [
"\t\t\tobc.bindOrderByColumn(target, this);"
],
"header": "@@ -150,7 +150,7 @@ public class OrderByList extends OrderedColumnList",
"removed": [
"\t\t\tobc.bindOrderByColumn(target);"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/compile/ResultColumnList.java",
"hunks": [
{
"added": [
" * later in getOrderByColumnToBind we determine that these are",
" * duplicates and we take them back out again."
],
"header": "@@ -89,8 +89,8 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
" * later in getOrderByColumn we determine that these are duplicates and",
" * we take them back out again."
]
},
{
"added": [
"\t * For order by column bind, get a ResultColumn that matches the specified ",
"\t * This method is called during bind processing, in the special",
"\t * \"bind the order by\" call that is made by CursorNode.bindStatement().",
"\t * The OrderByList has a special set of bind processing routines",
"\t * that analyzes the columns in the ORDER BY list and verifies that",
"\t * each column is one of:",
"\t * - a direct reference to a column explicitly mentioned in",
"\t * the SELECT list",
"\t * - a direct reference to a column implicitly mentioned as \"SELECT *\"",
"\t * - a direct reference to a column \"pulled up\" into the result",
"\t * column list",
"\t * - or a valid and fully-bound expression (\"c+2\", \"YEAR(hire_date)\", etc.)",
"\t *",
"\t * At this point in the processing, it is possible that we'll find",
"\t * the column present in the RCL twice: once because it was pulled",
"\t * up during statement compilation, and once because it was added",
"\t * when \"SELECT *\" was expanded into the table's actual column list.",
"\t * If we find such a duplicated column, we can, and do, remove the",
"\t * pulled-up copy of the column and point the OrderByColumn",
"\t * to the actual ResultColumn from the *-expansion.",
"\t *",
"\t * Note that the association of the OrderByColumn with the",
"\t * corresponding ResultColumn in the RCL occurs in",
"\t * OrderByColumn.resolveAddedColumn.",
"\t *",
"\t * @param obc The OrderByColumn we're binding.",
"\tpublic ResultColumn getOrderByColumnToBind(",
" String columnName,",
" TableName tableName,",
" int tableNumber,",
" OrderByColumn obc)"
],
"header": "@@ -385,18 +385,47 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t * For order by, get a ResultColumn that matches the specified ",
"\tpublic ResultColumn getOrderByColumn(String columnName, TableName tableName, int tableNumber)"
]
},
{
"added": [
"\t\t\t\t\tobc.clearAddedColumnOffset();",
"\t\t\t\t\tcollapseVirtualColumnIdGap(",
"\t\t\t\t\t\t\tresultColumn.getColumnPosition());"
],
"header": "@@ -455,6 +484,9 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": []
},
{
"added": [
"\t/**",
"\t * Adjust virtualColumnId values due to result column removal",
"\t *",
"\t * This method is called when a duplicate column has been detected and",
"\t * removed from the list. We iterate through each of the other columns",
"\t * in the list and notify them of the column removal so they can adjust",
"\t * their virtual column id if necessary.",
"\t *",
"\t * @param gap id of the column which was just removed.",
"\t */",
"\tprivate void collapseVirtualColumnIdGap(int gap)",
"\t{",
"\t\tfor (int index = 0; index < size(); index++)",
"\t\t\t((ResultColumn) elementAt(index)).collapseVirtualColumnIdGap(gap);",
"\t}",
"",
"\t * This method is called during pull-up processing, at the very",
"\t * start of bind processing, as part of",
"\t * OrderByList.pullUpOrderByColumns. Its job is to figure out",
"\t * whether the provided column (from the ORDER BY list) already",
"\t * exists in the ResultColumnList or not. If the column does",
"\t * not exist in the RCL, we return NULL, which signifies that",
"\t * a new ResultColumn should be generated and added (\"pulled up\")",
"\t * to the RCL by our caller.",
"\t *",
"\t * Note that at this point in the processing, we should never",
"\t * find this column present in the RCL multiple times; if the",
"\t * column is already present in the RCL, then we don't need to,",
"\t * and won't, pull a new ResultColumn up into the RCL.",
"\t *",
"\t * If the caller specified \"SELECT *\", then the RCL at this",
"\t * point contains a special AllResultColumn object. This object",
"\t * will later be expanded and replaced by the actual set of",
"\t * columns in the table, but at this point we don't know what",
"\t * those columns are, so we may pull up an OrderByColumn",
"\t * which duplicates a column in the *-expansion; such",
"\t * duplicates will be removed at the end of bind processing",
"\t * by OrderByList.bindOrderByColumns.",
"\t *",
"\t * @return\tthe column that matches that name, or NULL if pull-up needed",
"\tpublic ResultColumn findResultColumnForOrderBy(",
" String columnName, TableName tableName)"
],
"header": "@@ -462,18 +494,58 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t * @return\tthe column that matches that name.",
"\tpublic ResultColumn getOrderByColumn(String columnName, TableName tableName)"
]
},
{
"added": [
"\t\t\t\t{",
"\t\t\t\t\tSanityManager.THROWASSERT(",
"\t\t\t\t\t\t\t\"Unexpectedly found ORDER BY column '\" +",
"\t\t\t\t\t\t\tcolumnName + \"' pulled up at position \" +index);"
],
"header": "@@ -513,10 +585,10 @@ public class ResultColumnList extends QueryTreeNodeVector",
"removed": [
"\t\t\t\t{// remove the column due to pullup of orderby item",
"\t\t\t\t\tremoveElement(resultColumn);",
"\t\t\t\t\tdecOrderBySelect();",
"\t\t\t\t\tbreak;"
]
}
]
}
] |
derby-DERBY-1862-d52f8785
|
DERBY-1862 Patch makes a map of column names to column numbers. The map is populated when the first call to findColumn is made.
Patch contributed by Andreas Korneliussen [email protected]
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@448949 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.util.ReuseFactory;"
],
"header": "@@ -57,6 +57,7 @@ import org.apache.derby.iapi.reference.JDBC20Translation;",
"removed": []
},
{
"added": [
"import java.util.Map;",
"import java.util.HashMap;"
],
"header": "@@ -77,6 +78,8 @@ import java.io.InputStream;",
"removed": []
},
{
"added": [
"\t",
"\t/**",
"\t * A map which maps a column name to a column number.",
"\t * Entries only added when accessing columns with the name.",
"\t */",
"\tprivate Map columnNameMap;"
],
"header": "@@ -145,6 +148,12 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": []
},
{
"added": [
"\t\tthis.columnNameMap = null;"
],
"header": "@@ -260,6 +269,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": []
},
{
"added": [
"\t\t",
"\t\tfinal Map workMap; ",
"\t\t ",
"\t\tsynchronized (this) {",
"\t\t\tif (columnNameMap==null) {",
"\t\t\t\t// updateXXX and getXXX methods are case insensitive and the ",
"\t\t\t\t// first column should be returned. The loop goes backward to ",
"\t\t\t\t// create a map which preserves this property.",
"\t\t\t\tcolumnNameMap = new HashMap();",
"\t\t\t\tfor (int i = resultDescription.getColumnCount(); i>=1; i--) {",
"\t\t\t\t\t",
"\t\t\t\t\tfinal String key = StringUtil.",
"\t\t\t\t\t\tSQLToUpperCase(resultDescription.",
"\t\t\t\t\t\t\tgetColumnDescriptor(i).getName());",
"\t\t\t\t\t",
"\t\t\t\t\tfinal Integer value = ReuseFactory.getInteger(i);",
"\t\t\t\t\t",
"\t\t\t\t\tcolumnNameMap.put(key, value);",
"\t\t\t\t}",
"\t\t\t}",
"\t\t\tworkMap = columnNameMap;",
"\t\t}",
"\t\t",
"\t\tInteger val = (Integer) workMap.get(columnName);",
"\t\tif (val==null) {",
"\t\t\tval = (Integer) workMap.get(StringUtil.SQLToUpperCase(columnName));",
"\t\t}",
"\t\tif (val==null) {",
"\t\t\tthrow newSQLException(SQLState.COLUMN_NOT_FOUND, columnName);",
"\t\t} else {",
"\t\t\treturn val.intValue();",
"\t\t}"
],
"header": "@@ -4225,29 +4235,41 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t// REVISIT: we might want to cache our own info...",
"\t\t",
"",
"\t\tResultDescription rd = resultDescription;",
"",
" \t// 1 or 0 based? assume 1 (probably wrong)",
" // Changing the order in which columns are found from 1 till column count.",
" // This is necessary in cases where the column names are the same but are in different cases.",
" // This is because in updateXXX and getXXX methods column names are case insensitive",
" // and in that case the first column should be returned.",
" ",
" int columnCount = rd.getColumnCount();",
"",
" for(int i = 1 ; i<= columnCount;i++) {",
" \t\tString name = rd.getColumnDescriptor(i).getName();",
" \t\tif (StringUtil.SQLEqualsIgnoreCase(columnName, name)) {",
" \t\t\treturn i;",
" \t\t}",
" \t}",
" \tthrow newSQLException(SQLState.COLUMN_NOT_FOUND, columnName);"
]
}
]
}
] |
derby-DERBY-1866-cdd73ccf
|
DERBY-1866
contributed by Army Brown
patch: d1866_v1.patch
Attaching a first patch for this issue, d1866_v1.patch. In short, the problem was that, when pushing predicates to subqueries beneath UNIONs, the predicates were always being pushed to the *first* table in the subquery's FROM list, regardless of whether or not that was actually the correct table. Thus it was possible to push a predicate down to a base table to which it didn't apply, thereby leading to an assertion failure in sane mode and an index out of bounds exception in insane mode.
For details on how this occurred and what the fix is, please refer to the code comments in the patch. The d1866_v1 patch does the following:
1. Adds logic to ensure scoped predicates are only pushed
to the appropriate base tables.
2. Adds one line to OptimizerImpl to solve the hang that
was occurring for the second query shown in repro.sql.
The problem there was just that one variable was not
being properly reset when beginning a new round of
optimization.
3. Adds some test cases to verify the changes for #1 and
#2.
Note that the patch is mostly just explanatory comments for existing and new logic, plus the test cases.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@450155 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/OptimizerImpl.java",
"hunks": [
{
"added": [
"",
"\t\t/* If user specified the optimizer override for a fixed",
"\t\t * join order, then desiredJoinOrderFound could be true",
"\t\t * when we get here. We have to reset it to false in",
"\t\t * prep for the next round of optimization. Otherwise",
"\t\t * we'd end up quitting the optimization before ever",
"\t\t * finding a plan for this round, and that could, among",
"\t\t * other things, lead to a never-ending optimization",
"\t\t * phase in certain situations. DERBY-1866.",
"\t\t */",
"\t\tdesiredJoinOrderFound = false;"
],
"header": "@@ -343,6 +343,17 @@ public class OptimizerImpl implements Optimizer",
"removed": []
},
{
"added": [
"\t\tint\t\t numPreds = predicateList.size();",
"\t\tJBitSet\t predMap = new JBitSet(numTablesInQuery);",
"\t\tJBitSet curTableNums = null;",
"\t\tBaseTableNumbersVisitor btnVis = null;",
"\t\tboolean pushPredNow = false;",
"\t\tint tNum;",
"\t\tPredicate pred;"
],
"header": "@@ -1236,9 +1247,13 @@ public class OptimizerImpl implements Optimizer",
"removed": [
"\t\tint\t\tnumPreds = predicateList.size();",
"\t\tJBitSet\tpredMap = new JBitSet(numTablesInQuery);",
"\t\tOptimizablePredicate pred;"
]
},
{
"added": [
"\t\t\tpred = (Predicate)predicateList.getOptPredicate(predCtr);"
],
"header": "@@ -1249,7 +1264,7 @@ public class OptimizerImpl implements Optimizer",
"removed": [
"\t\t\tpred = predicateList.getOptPredicate(predCtr);"
]
}
]
}
] |
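The core of the DERBY-1866 fix is a guard: a scoped predicate may only be pushed to a base table whose set of base table numbers covers every table the predicate references. A minimal sketch of that subset test follows, using `java.util.BitSet` in place of Derby's `JBitSet`; the class and method names are illustrative, not Derby's.

```java
import java.util.BitSet;

/**
 * Sketch of the subset check DERBY-1866 adds before pushing a scoped
 * predicate below a UNION: push only when every table the predicate
 * references is among the target table's base table numbers.
 */
public class PredicatePushCheck {

    /** true iff every bit set in predReferences is also set in tableMap */
    static boolean canPushTo(BitSet predReferences, BitSet tableMap) {
        BitSet leftover = (BitSet) predReferences.clone();
        leftover.andNot(tableMap);   // bits that survive reference other tables
        return leftover.isEmpty();
    }

    /** convenience builder for the test: a BitSet with the given bits set */
    static BitSet bits(int... idx) {
        BitSet b = new BitSet();
        for (int i : idx) b.set(i);
        return b;
    }
}
```

Without this guard, a predicate on table 2 could be blindly pushed to the first table in the subquery's FROM list, which is exactly the mispush that triggered the assertion failure described above.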
derby-DERBY-1876-b408ff8d
|
DERBY-1876 Change currentRow in EmbedResultSet to be null if not on current row, otherwise be a reference to the current row of the top-level language result set. Avoids an object allocation per-EmbedResultSet that was never used.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@566311 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [],
"header": "@@ -51,13 +51,10 @@ import org.apache.derby.iapi.services.io.StreamStorable;",
"removed": [
"import org.apache.derby.iapi.services.io.LimitReader;",
"import org.apache.derby.iapi.util.StringUtil;",
"import org.apache.derby.iapi.util.ReuseFactory;"
]
},
{
"added": [],
"header": "@@ -78,8 +75,6 @@ import java.io.InputStream;",
"removed": [
"import java.util.Map;",
"import java.util.HashMap;"
]
},
{
"added": [
"\t * If currentRow is null, the cursor is not postioned on a row ",
"\tprivate ExecRow currentRow;\t"
],
"header": "@@ -106,10 +101,9 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t * If the containing row array is null, the cursor is not postioned on a ",
"\t * row ",
"\tprivate final ExecRow currentRow;\t"
]
},
{
"added": [
"\t\t",
" final int columnCount = resultDescription.getColumnCount();",
" final ExecutionFactory factory = conn.getLanguageConnection().",
" getLanguageConnectionFactory().getExecutionFactory();",
" "
],
"header": "@@ -257,16 +251,15 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\tfinal ExecutionFactory factory = conn.getLanguageConnection().",
"\t\t\tgetLanguageConnectionFactory().getExecutionFactory();",
"\t\tfinal int columnCount = resultDescription.getColumnCount();",
"\t\tthis.currentRow = factory.getValueRow(columnCount);",
"\t\tcurrentRow.setRowArray(null);",
""
]
},
{
"added": [
" else",
" {",
" updateRow = null;",
" }"
],
"header": "@@ -279,9 +272,11 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t} else {",
"\t\t\tupdateRow = null;"
]
},
{
"added": [
"\t\tif (currentRow == null) {"
],
"header": "@@ -324,7 +319,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\tif (currentRow.getRowArray() == null) {"
]
},
{
"added": [
" boolean onRow = (currentRow = newRow) != null;\t\t\t"
],
"header": "@@ -491,13 +486,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t\tboolean onRow = (newRow!=null);",
"\t\t\tif (onRow) {",
"\t\t\t\tcurrentRow.setRowArray(newRow.getRowArray());",
"\t\t\t} else {",
"\t\t\t\tcurrentRow.setRowArray(null);",
"\t\t\t}",
"\t\t\t"
]
},
{
"added": [
"\t\t\tcurrentRow = null;"
],
"header": "@@ -636,7 +625,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t\tcurrentRow.setRowArray(null);"
]
},
{
"added": [
" currentRow = null;"
],
"header": "@@ -3753,7 +3742,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" currentRow.setRowArray(null);"
]
},
{
"added": [
" currentRow = null;"
],
"header": "@@ -3817,7 +3806,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" currentRow.setRowArray(null);"
]
},
{
"added": [
"\t if (columnIndex < 1 || columnIndex > resultDescription.getColumnCount()) {"
],
"header": "@@ -4380,7 +4369,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t if (columnIndex < 1 || columnIndex > currentRow.nColumns()) {"
]
}
]
}
] |
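The pattern in the diff above is simple but worth isolating: instead of allocating a row holder per result set and signalling "no current row" by nulling its internal array, keep one nullable reference and let a single assignment do both jobs. A sketch under illustrative names (this is not Derby's actual class):

```java
/**
 * Sketch of the DERBY-1876 currentRow change: a nullable reference
 * replaces an always-allocated holder whose array was nulled out.
 * null means the cursor is not positioned on a row.
 */
public class CursorSketch {
    private Object[] currentRow;   // null => not positioned on a row

    /** one assignment replaces allocate-then-setRowArray; returns "on row" */
    boolean positionOnRow(Object[] newRow) {
        return (currentRow = newRow) != null;
    }

    void checkOnRow() {
        if (currentRow == null)
            throw new IllegalStateException("no current row");
    }
}
```

The win is one fewer object allocation per result set, which matters on the simple-query fast path the commit message targets.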
derby-DERBY-1876-c6012226
|
DERBY-1876: Move conversion of query timeout to milliseconds out of
EmbedResultSet's constructor, and get column count without creating a
meta-data object
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@522445 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [
" private final long timeoutMillis;"
],
"header": "@@ -183,7 +183,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" private long timeoutMillis;"
]
},
{
"added": [
" : stmt.timeoutMillis;"
],
"header": "@@ -232,7 +232,7 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" : (long)stmt.getQueryTimeout() * 1000L;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedStatement.java",
"hunks": [
{
"added": [
"\t/**",
"\t * Query timeout in milliseconds. By default, no statements time",
"\t * out. Timeout is set explicitly with setQueryTimeout().",
"\t */",
" long timeoutMillis;"
],
"header": "@@ -94,7 +94,11 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" private int timeoutSeconds;"
]
},
{
"added": [],
"header": "@@ -128,10 +132,6 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
"",
" // By default, no statements time out.",
" // Timeout is set explicitly with setQueryTimeout().",
" timeoutSeconds = 0;"
]
},
{
"added": [
" return (int) (timeoutMillis / 1000);"
],
"header": "@@ -406,7 +406,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" return timeoutSeconds;"
]
},
{
"added": [
" timeoutMillis = (long) seconds * 1000;"
],
"header": "@@ -423,7 +423,7 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" timeoutSeconds = seconds;"
]
},
{
"added": [],
"header": "@@ -1177,7 +1177,6 @@ public class EmbedStatement extends ConnectionChild",
"removed": [
" long timeoutMillis = (long)timeoutSeconds * 1000L;"
]
}
]
}
] |
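The timeout change above moves the seconds-to-milliseconds conversion to `setQueryTimeout`, so the per-execution path reads a precomputed `long`. A sketch of that shape (illustrative class name; the widening cast before the multiply is the important detail, since `seconds * 1000` in `int` arithmetic can overflow):

```java
/**
 * Sketch of storing the JDBC query timeout in milliseconds once, at set
 * time, rather than converting on every execution (DERBY-1876 follow-up).
 */
public class TimeoutSketch {
    private long timeoutMillis;    // 0 => statements never time out

    void setQueryTimeout(int seconds) {
        // widen to long before multiplying to avoid int overflow
        timeoutMillis = (long) seconds * 1000L;
    }

    int getQueryTimeout() {
        return (int) (timeoutMillis / 1000);
    }
}
```

For example, 3,000,000 seconds becomes 3,000,000,000 ms, which does not fit in an `int` but round-trips correctly through the `long` field.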
derby-DERBY-1876-f512b2fc
|
DERBY-1879 Save the EmbedResultSetMetaData object and the case-insensitive column name map in the ResultDescription object
and not EmbedResultSet. This means these objects are created once per compiled plan (as needed) and not once per
EmbedResultSet (as needed). This improves the performance by reducing the overhead for simple queries (DERBY-1876).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@450607 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/ResultDescription.java",
"hunks": [
{
"added": [
"\t * copy. The saved JDBC ResultSetMetaData will",
" * not be copied."
],
"header": "@@ -74,7 +74,8 @@ public interface ResultDescription",
"removed": [
"\t * copy."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [],
"header": "@@ -129,7 +129,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\tprivate ResultSetMetaData rMetaData;"
]
},
{
"added": [],
"header": "@@ -148,12 +147,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t",
"\t/**",
"\t * A map which maps a column name to a column number.",
"\t * Entries only added when accessing columns with the name.",
"\t */",
"\tprivate Map columnNameMap;"
]
},
{
"added": [],
"header": "@@ -269,7 +262,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\tthis.columnNameMap = null;"
]
},
{
"added": [],
"header": "@@ -642,7 +634,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t\trMetaData = null; // let it go, we can make a new one"
]
},
{
"added": [
" public final ResultSetMetaData getMetaData() throws SQLException {",
"\t\t\t\t\t\t\t\t// on the underlying connection.",
" ResultSetMetaData rMetaData =",
" resultDescription.getMetaData();",
"\t\t\t// save this object at the plan level",
"\t\t\trMetaData = factory.newEmbedResultSetMetaData(",
" resultDescription.getColumnInfo());",
" resultDescription.setMetaData(rMetaData);"
],
"header": "@@ -1612,21 +1603,21 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" public ResultSetMetaData getMetaData() throws SQLException {",
"\t\t\t\t\t\t\t\t// on the underlying connection. Do this",
"\t\t\t\t\t\t\t\t// outside of the connection synchronization.",
"",
"\t synchronized (getConnectionSynchronization()) {",
"\t\t\t// cache this object and keep returning it",
"\t\t\trMetaData = newEmbedResultSetMetaData(resultDescription);",
"\t }"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSetMetaData.java",
"hunks": [
{
"added": [
" * We take the (Derby) ResultDescription and examine it, to return",
" <P>",
" EmbedResultSetMetaData objects are shared across multiple threads",
" by being stored in the ResultDescription for a compiled plan.",
" If the required api for ResultSetMetaData ever changes so",
" that it has a close() method, a getConnection() method or",
" any other Connection or ResultSet specific method then",
" this sharing must be removed."
],
"header": "@@ -41,12 +41,19 @@ import java.sql.ResultSet;",
"removed": [
" * We take the (cloudscape) ResultDescription and examine it, to return"
]
},
{
"added": [
"\tpublic final int getColumnCount()\t{",
"\t\treturn columnInfo.length;"
],
"header": "@@ -71,8 +78,8 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getColumnCount()\t{",
"\t\treturn columnInfo == null ? 0 : columnInfo.length;"
]
},
{
"added": [
"\tpublic final boolean isAutoIncrement(int column) throws SQLException\t{",
" validColumnNumber(column);"
],
"header": "@@ -83,8 +90,8 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isAutoIncrement(int column) throws SQLException\t{",
""
]
},
{
"added": [
"\tpublic final boolean isCaseSensitive(int column) throws SQLException\t{"
],
"header": "@@ -96,7 +103,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isCaseSensitive(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isSearchable(int column) throws SQLException\t{"
],
"header": "@@ -108,7 +115,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isSearchable(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isCurrency(int column) throws SQLException\t{"
],
"header": "@@ -123,7 +130,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isCurrency(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int isNullable(int column) throws SQLException\t{"
],
"header": "@@ -135,7 +142,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int isNullable(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isSigned(int column) throws SQLException\t{"
],
"header": "@@ -146,7 +153,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isSigned(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int getColumnDisplaySize(int column) throws SQLException\t{"
],
"header": "@@ -158,7 +165,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getColumnDisplaySize(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final String getColumnLabel(int column) throws SQLException {"
],
"header": "@@ -170,7 +177,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getColumnLabel(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final String getColumnName(int column) throws SQLException\t{"
],
"header": "@@ -186,7 +193,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getColumnName(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final String getSchemaName(int column) throws SQLException\t{"
],
"header": "@@ -202,7 +209,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getSchemaName(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int getPrecision(int column) throws SQLException\t{"
],
"header": "@@ -217,7 +224,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getPrecision(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int getScale(int column) throws SQLException\t{"
],
"header": "@@ -229,7 +236,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getScale(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final String getTableName(int column) throws SQLException {"
],
"header": "@@ -241,7 +248,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getTableName(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final String getCatalogName(int column) throws SQLException {"
],
"header": "@@ -256,7 +263,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getCatalogName(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final int getColumnType(int column) throws SQLException {"
],
"header": "@@ -269,7 +276,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getColumnType(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final String getColumnTypeName(int column) throws SQLException\t{"
],
"header": "@@ -281,7 +288,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getColumnTypeName(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isReadOnly(int column) throws SQLException {"
],
"header": "@@ -293,7 +300,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isReadOnly(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final boolean isWritable(int column) throws SQLException {"
],
"header": "@@ -307,7 +314,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isWritable(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final boolean isDefinitelyWritable(int column) throws SQLException\t{"
],
"header": "@@ -319,7 +326,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isDefinitelyWritable(int column) throws SQLException\t{"
]
},
{
"added": [
"\tprivate DataTypeDescriptor getColumnTypeDescriptor(int column) throws SQLException "
],
"header": "@@ -337,7 +344,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic DataTypeDescriptor getColumnTypeDescriptor(int column) throws SQLException "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/GenericResultDescription.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.SQLState;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
},
{
"added": [
"import org.apache.derby.iapi.util.ReuseFactory;",
"import org.apache.derby.iapi.util.StringUtil;",
"import java.sql.ResultSetMetaData;",
"import java.util.Collections;",
"import java.util.HashMap;",
"import java.util.Map;",
""
],
"header": "@@ -29,10 +30,17 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": []
},
{
"added": [
" ",
" /**",
" * Saved JDBC ResultSetMetaData object.",
" * @see ResultDescription#setMetaData(java.sql.ResultSetMetaData)",
" */",
" private transient ResultSetMetaData metaData;",
" ",
" /**",
" * A map which maps a column name to a column number.",
" * Entries only added when accessing columns with the name.",
" */",
" private Map columnNameMap;"
],
"header": "@@ -61,6 +69,18 @@ public final class GenericResultDescription",
"removed": []
},
{
"added": [
"",
" /**",
" * Set the meta data if it has not already been set.",
" */",
" public synchronized void setMetaData(ResultSetMetaData rsmd) {",
" if (metaData == null)",
" metaData = rsmd;",
" }",
"",
" /**",
" * Get the saved meta data.",
" */",
" public synchronized ResultSetMetaData getMetaData() {",
" return metaData;",
" }",
"",
" /**",
" * Find a column name based upon the JDBC rules for",
" * getXXX and setXXX. Name matching is case-insensitive,",
" * matching the first name (1-based) if there are multiple",
" * columns that map to the same name.",
" */",
" public int findColumnInsenstive(String columnName) {",
" ",
" final Map workMap; ",
" ",
" synchronized (this) {",
" if (columnNameMap==null) {",
" // updateXXX and getXXX methods are case insensitive and the ",
" // first column should be returned. The loop goes backward to ",
" // create a map which preserves this property.",
" Map map = new HashMap();",
" for (int i = getColumnCount(); i>=1; i--) {",
" ",
" final String key = StringUtil.",
" SQLToUpperCase(",
" getColumnDescriptor(i).getName());",
" ",
" final Integer value = ReuseFactory.getInteger(i);",
" ",
" map.put(key, value);",
" }",
" ",
" // Ensure this map can never change.",
" columnNameMap = Collections.unmodifiableMap(map);",
" }",
" workMap = columnNameMap;",
" }",
" ",
" Integer val = (Integer) workMap.get(columnName);",
" if (val==null) {",
" val = (Integer) workMap.get(StringUtil.SQLToUpperCase(columnName));",
" }",
" if (val==null) {",
" return -1;",
" } else {",
" return val.intValue();",
" }",
" }"
],
"header": "@@ -257,5 +277,64 @@ public final class GenericResultDescription",
"removed": []
}
]
}
] |
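The `findColumnInsenstive` logic added to `GenericResultDescription` above builds its column-name map backwards so that, among duplicate names, the lowest column number survives. A compact sketch of that trick follows; the class name is illustrative and `toUpperCase(Locale.ROOT)` only approximates Derby's `SQLToUpperCase`.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

/**
 * Sketch of the DERBY-1879 column-name map: iterate backwards so that
 * when two columns share a name case-insensitively, the put() for the
 * lower column number lands last and wins, matching JDBC findColumn
 * semantics. The unmodifiable wrapper makes the map safe to cache on a
 * shared compiled plan.
 */
public class ColumnMapSketch {

    static Map<String, Integer> buildMap(String[] names) {
        Map<String, Integer> map = new HashMap<>();
        for (int i = names.length; i >= 1; i--) {
            // later puts (lower 1-based indexes) overwrite earlier ones
            map.put(names[i - 1].toUpperCase(Locale.ROOT), i);
        }
        return Collections.unmodifiableMap(map);
    }

    static int findColumn(Map<String, Integer> map, String name) {
        Integer v = map.get(name);                         // exact-case hit
        if (v == null)
            v = map.get(name.toUpperCase(Locale.ROOT));    // fallback
        return v == null ? -1 : v;
    }
}
```

With columns `{"ID", "name", "NAME"}`, looking up `"name"` yields 2, not 3, because the backward build preserved the first-column-wins rule.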
derby-DERBY-1878-47dd4379
|
DERBY-1878
contributed by Sunitha Kambhampati
patch: derby1878.diff.txt
I am attaching the patch (derby1878.diff.txt) to improve some error handling in some of the network server tests.
1. The execCmdDumpResults method used by the five tests timeslice.java, maxthreads.java, testij.java, runtimeinfo.java, sysinfo.java suffers from the same problems that were fixed for testProperties.java, namely
-- the output stream for the sub process is not flushed out
-- there is no timeout handling for the ProcessStreamResult
2.Eliminate duplication of code in these 5 tests for execCmdDumpResults(String[] args) method. The execCmdDumpResults method basically exec's a new process and then waits for the process to dump the results. A new utility class - ExecProcUtil is added with execCmdDumpResults that is generalized to take the necessary input from the tests as well as fixes the issues mentioned in #1. The OutputStream is flushed out by calling bos.flush and System.out.flush and the timeout is added for ProcessStreamResult.
3.Make use of the TimedProcess to kill process if process does not exit within the timeout period.
4.TestConnection.java test has some variation of the execCmdDumpResults and it also adds some testcases into this method. Hence this method in this test is left as is and timeout handling is added.
5.testij.out has been updated. The previous master file was eating away the last line that was written to System.out, but now that the process's streams are flushed properly, the last line in testij.out test which prints out 'End test'
is also seen in the output file.
Also noticed that these tests - like timeslice.java and maxthreads.java - all seem to set the properties for the server and then check if the property is set. The functionality of the server when these properties are set is not being tested. It would be good to add tests that verify the functionality itself works as expected. Please note, recently connecting to the server with timeslice options discovered some issues with the server (see DERBY-1856).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@450508 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/testing/org/apache/derbyTesting/functionTests/util/ExecProcUtil.java",
"hunks": [
{
"added": [
"/*",
" ",
" Derby - Class org.apache.derbyTesting.functionTests.util.ExecProcUtil",
" ",
" Licensed to the Apache Software Foundation (ASF) under one or more",
" contributor license agreements. See the NOTICE file distributed with",
" this work for additional information regarding copyright ownership.",
" The ASF licenses this file to You under the Apache License, Version 2.0",
" (the \"License\"); you may not use this file except in compliance with",
" the License. You may obtain a copy of the License at",
" ",
" http://www.apache.org/licenses/LICENSE-2.0",
" ",
" Unless required by applicable law or agreed to in writing, software",
" distributed under the License is distributed on an \"AS IS\" BASIS,",
" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
" See the License for the specific language governing permissions and",
" limitations under the License.",
" ",
" */",
"package org.apache.derbyTesting.functionTests.util;",
"",
"import org.apache.derbyTesting.functionTests.harness.ProcessStreamResult;",
"import org.apache.derbyTesting.functionTests.harness.TimedProcess;",
"import java.util.Vector;",
"import java.io.BufferedOutputStream;",
"/**",
" * Utility class to hold helper methods to exec new processes",
" */",
"public class ExecProcUtil {",
" ",
" /**",
" * For each new exec process done, set ",
" * timeout for ProcessStreamResult after which the thread that ",
" * handles the streams for the process exits. Timeout is in minutes. ",
" * Note: timeout handling will only come into effect when ",
" * ProcessStreamResult#Wait() is called",
" */",
" private static String timeoutMinutes = \"2\";",
" ",
" /**",
" * timeout in seconds for the processes spawned.",
" */",
" private static int timeoutSecondsForProcess = 180;",
" ",
" /**",
" * Execute the given command and dump the results to standard out",
" *",
" * @param args command and arguments",
" * @param vCmd java command line arguments.",
" * @param bos buffered stream (System.out) to dump results to.",
" * @exception Exception",
" */",
" public static void execCmdDumpResults(String[] args, Vector vCmd,",
" BufferedOutputStream bos) throws Exception {",
" // We need the process inputstream and errorstream",
" ProcessStreamResult prout = null;",
" ProcessStreamResult prerr = null;",
"",
" StringBuffer sb = new StringBuffer();",
"",
" for (int i = 0; i < args.length; i++) {",
" sb.append(args[i] + \" \");",
" }",
" System.out.println(sb.toString());",
" int totalSize = vCmd.size() + args.length;",
" String serverCmd[] = new String[totalSize];",
"",
" int i = 0;",
" for (i = 0; i < vCmd.size(); i++)",
" serverCmd[i] = (String) vCmd.elementAt(i);",
"",
" for (int j = 0; i < totalSize; i++)",
" serverCmd[i] = args[j++];",
"",
" System.out.flush();",
" bos.flush();",
"",
" // Start a process to run the command",
" Process pr = Runtime.getRuntime().exec(serverCmd);",
"",
" // TimedProcess, kill process if process doesnt finish in a certain ",
" // amount of time",
" TimedProcess tp = new TimedProcess(pr);",
" prout = new ProcessStreamResult(pr.getInputStream(), bos,",
" timeoutMinutes);",
" prerr = new ProcessStreamResult(pr.getErrorStream(), bos,",
" timeoutMinutes);",
"",
" // wait until all the results have been processed",
" boolean outTimedOut = prout.Wait();",
" boolean errTimedOut = prerr.Wait();",
" ",
" // wait for this process to terminate, upto a wait period",
" // of 'timeoutSecondsForProcess'",
" // if process has already been terminated, this call will ",
" // return immediately.",
" tp.waitFor(timeoutSecondsForProcess);",
" pr = null;",
" ",
" if (outTimedOut || errTimedOut)",
" System.out.println(\" Reading from process streams timed out.. \");",
"",
" System.out.flush();",
" }",
" ",
"}"
],
"header": "@@ -0,0 +1,107 @@",
"removed": []
}
]
}
] |
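The ExecProcUtil class added above enforces two rules for spawned test processes: drain the child's output streams (so the child can never block on a full pipe) and bound the wait, killing the process if it overruns. A sketch of the same pattern using the modern `java.lang.Process` API, rather than the harness's own `ProcessStreamResult`/`TimedProcess` classes (class and method names here are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of the DERBY-1878 exec pattern: merge and drain the child's
 * streams, wait with a timeout, and forcibly kill a hung process so it
 * cannot hang the test run. Requires Java 9+ for InputStream.transferTo.
 */
public class ExecSketch {

    /** Runs cmd, echoing its output; returns exit code, or -1 on timeout. */
    static int runWithTimeout(String[] cmd, long timeoutSeconds) {
        try {
            ProcessBuilder pb = new ProcessBuilder(cmd);
            pb.redirectErrorStream(true);         // merge stderr into stdout
            Process p = pb.start();
            try (InputStream in = p.getInputStream()) {
                in.transferTo(System.out);        // drain so the child can't block
            }
            if (!p.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
                p.destroyForcibly();              // kill, like TimedProcess
                return -1;
            }
            return p.exitValue();
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The design point matches the patch: reading the streams to completion before (or while) waiting is what prevents the deadlock where the child stalls writing to a never-read pipe while the parent stalls in waitFor.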
derby-DERBY-1879-f512b2fc
|
DERBY-1879 Save the EmbedResultSetMetaData object and the case-insensitive column name map in the ResultDescription object
and not EmbedResultSet. This means these objects are created once per compiled plan (as needed) and not once per
EmbedResultSet (as needed). This improves the performance by reducing the overhead for simple queries (DERBY-1876).
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@450607 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/ResultDescription.java",
"hunks": [
{
"added": [
"\t * copy. The saved JDBC ResultSetMetaData will",
" * not be copied."
],
"header": "@@ -74,7 +74,8 @@ public interface ResultDescription",
"removed": [
"\t * copy."
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSet.java",
"hunks": [
{
"added": [],
"header": "@@ -129,7 +129,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\tprivate ResultSetMetaData rMetaData;"
]
},
{
"added": [],
"header": "@@ -148,12 +147,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t",
"\t/**",
"\t * A map which maps a column name to a column number.",
"\t * Entries only added when accessing columns with the name.",
"\t */",
"\tprivate Map columnNameMap;"
]
},
{
"added": [],
"header": "@@ -269,7 +262,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\tthis.columnNameMap = null;"
]
},
{
"added": [],
"header": "@@ -642,7 +634,6 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
"\t\t\trMetaData = null; // let it go, we can make a new one"
]
},
{
"added": [
" public final ResultSetMetaData getMetaData() throws SQLException {",
"\t\t\t\t\t\t\t\t// on the underlying connection.",
" ResultSetMetaData rMetaData =",
" resultDescription.getMetaData();",
"\t\t\t// save this object at the plan level",
"\t\t\trMetaData = factory.newEmbedResultSetMetaData(",
" resultDescription.getColumnInfo());",
" resultDescription.setMetaData(rMetaData);"
],
"header": "@@ -1612,21 +1603,21 @@ public abstract class EmbedResultSet extends ConnectionChild",
"removed": [
" public ResultSetMetaData getMetaData() throws SQLException {",
"\t\t\t\t\t\t\t\t// on the underlying connection. Do this",
"\t\t\t\t\t\t\t\t// outside of the connection synchronization.",
"",
"\t synchronized (getConnectionSynchronization()) {",
"\t\t\t// cache this object and keep returning it",
"\t\t\trMetaData = newEmbedResultSetMetaData(resultDescription);",
"\t }"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedResultSetMetaData.java",
"hunks": [
{
"added": [
" * We take the (Derby) ResultDescription and examine it, to return",
" <P>",
" EmbedResultSetMetaData objects are shared across multiple threads",
" by being stored in the ResultDescription for a compiled plan.",
" If the required api for ResultSetMetaData ever changes so",
" that it has a close() method, a getConnection() method or",
" any other Connection or ResultSet specific method then",
" this sharing must be removed."
],
"header": "@@ -41,12 +41,19 @@ import java.sql.ResultSet;",
"removed": [
" * We take the (cloudscape) ResultDescription and examine it, to return"
]
},
{
"added": [
"\tpublic final int getColumnCount()\t{",
"\t\treturn columnInfo.length;"
],
"header": "@@ -71,8 +78,8 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getColumnCount()\t{",
"\t\treturn columnInfo == null ? 0 : columnInfo.length;"
]
},
{
"added": [
"\tpublic final boolean isAutoIncrement(int column) throws SQLException\t{",
" validColumnNumber(column);"
],
"header": "@@ -83,8 +90,8 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isAutoIncrement(int column) throws SQLException\t{",
""
]
},
{
"added": [
"\tpublic final boolean isCaseSensitive(int column) throws SQLException\t{"
],
"header": "@@ -96,7 +103,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isCaseSensitive(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isSearchable(int column) throws SQLException\t{"
],
"header": "@@ -108,7 +115,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isSearchable(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isCurrency(int column) throws SQLException\t{"
],
"header": "@@ -123,7 +130,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isCurrency(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int isNullable(int column) throws SQLException\t{"
],
"header": "@@ -135,7 +142,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int isNullable(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isSigned(int column) throws SQLException\t{"
],
"header": "@@ -146,7 +153,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isSigned(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int getColumnDisplaySize(int column) throws SQLException\t{"
],
"header": "@@ -158,7 +165,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getColumnDisplaySize(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final String getColumnLabel(int column) throws SQLException {"
],
"header": "@@ -170,7 +177,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getColumnLabel(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final String getColumnName(int column) throws SQLException\t{"
],
"header": "@@ -186,7 +193,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getColumnName(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final String getSchemaName(int column) throws SQLException\t{"
],
"header": "@@ -202,7 +209,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getSchemaName(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int getPrecision(int column) throws SQLException\t{"
],
"header": "@@ -217,7 +224,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getPrecision(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final int getScale(int column) throws SQLException\t{"
],
"header": "@@ -229,7 +236,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getScale(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final String getTableName(int column) throws SQLException {"
],
"header": "@@ -241,7 +248,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getTableName(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final String getCatalogName(int column) throws SQLException {"
],
"header": "@@ -256,7 +263,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getCatalogName(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final int getColumnType(int column) throws SQLException {"
],
"header": "@@ -269,7 +276,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic int getColumnType(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final String getColumnTypeName(int column) throws SQLException\t{"
],
"header": "@@ -281,7 +288,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic String getColumnTypeName(int column) throws SQLException\t{"
]
},
{
"added": [
"\tpublic final boolean isReadOnly(int column) throws SQLException {"
],
"header": "@@ -293,7 +300,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isReadOnly(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final boolean isWritable(int column) throws SQLException {"
],
"header": "@@ -307,7 +314,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isWritable(int column) throws SQLException {"
]
},
{
"added": [
"\tpublic final boolean isDefinitelyWritable(int column) throws SQLException\t{"
],
"header": "@@ -319,7 +326,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic boolean isDefinitelyWritable(int column) throws SQLException\t{"
]
},
{
"added": [
"\tprivate DataTypeDescriptor getColumnTypeDescriptor(int column) throws SQLException "
],
"header": "@@ -337,7 +344,7 @@ public class EmbedResultSetMetaData",
"removed": [
"\tpublic DataTypeDescriptor getColumnTypeDescriptor(int column) throws SQLException "
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/GenericResultDescription.java",
"hunks": [
{
"added": [
"import org.apache.derby.iapi.reference.SQLState;"
],
"header": "@@ -21,6 +21,7 @@",
"removed": []
},
{
"added": [
"import org.apache.derby.iapi.util.ReuseFactory;",
"import org.apache.derby.iapi.util.StringUtil;",
"import java.sql.ResultSetMetaData;",
"import java.util.Collections;",
"import java.util.HashMap;",
"import java.util.Map;",
""
],
"header": "@@ -29,10 +30,17 @@ import org.apache.derby.iapi.services.sanity.SanityManager;",
"removed": []
},
{
"added": [
" ",
" /**",
" * Saved JDBC ResultSetMetaData object.",
" * @see ResultDescription#setMetaData(java.sql.ResultSetMetaData)",
" */",
" private transient ResultSetMetaData metaData;",
" ",
" /**",
" * A map which maps a column name to a column number.",
" * Entries only added when accessing columns with the name.",
" */",
" private Map columnNameMap;"
],
"header": "@@ -61,6 +69,18 @@ public final class GenericResultDescription",
"removed": []
},
{
"added": [
"",
" /**",
" * Set the meta data if it has not already been set.",
" */",
" public synchronized void setMetaData(ResultSetMetaData rsmd) {",
" if (metaData == null)",
" metaData = rsmd;",
" }",
"",
" /**",
" * Get the saved meta data.",
" */",
" public synchronized ResultSetMetaData getMetaData() {",
" return metaData;",
" }",
"",
" /**",
" * Find a column name based upon the JDBC rules for",
" * getXXX and setXXX. Name matching is case-insensitive,",
" * matching the first name (1-based) if there are multiple",
" * columns that map to the same name.",
" */",
" public int findColumnInsenstive(String columnName) {",
" ",
" final Map workMap; ",
" ",
" synchronized (this) {",
" if (columnNameMap==null) {",
" // updateXXX and getXXX methods are case insensitive and the ",
" // first column should be returned. The loop goes backward to ",
" // create a map which preserves this property.",
" Map map = new HashMap();",
" for (int i = getColumnCount(); i>=1; i--) {",
" ",
" final String key = StringUtil.",
" SQLToUpperCase(",
" getColumnDescriptor(i).getName());",
" ",
" final Integer value = ReuseFactory.getInteger(i);",
" ",
" map.put(key, value);",
" }",
" ",
" // Ensure this map can never change.",
" columnNameMap = Collections.unmodifiableMap(map);",
" }",
" workMap = columnNameMap;",
" }",
" ",
" Integer val = (Integer) workMap.get(columnName);",
" if (val==null) {",
" val = (Integer) workMap.get(StringUtil.SQLToUpperCase(columnName));",
" }",
" if (val==null) {",
" return -1;",
" } else {",
" return val.intValue();",
" }",
" }"
],
"header": "@@ -257,5 +277,64 @@ public final class GenericResultDescription",
"removed": []
}
]
}
] |
derby-DERBY-1894-76812041
|
DERBY-1894
contributed by Yip Ng
patch: derby1894-trunk-diff02.txt
The fix is in FromBaseTable's getFromTableByName() method, which was using
the resolved synonym table name to do the binding for the ORDER BY column.
Patch includes additional tests.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@452259 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/compile/FromBaseTable.java",
"hunks": [
{
"added": [
"\t\t\t\t\t\t\t\t\t\t\t origTableName.getFullTableName());"
],
"header": "@@ -2260,7 +2260,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t\t\t\t\t\t\t\t\t\t tableName.getFullTableName());"
]
},
{
"added": [
"\t\t// ourSchemaName can be null if correlation name is specified.",
"\t\tString ourSchemaName = getOrigTableName().getSchemaName();"
],
"header": "@@ -2287,7 +2287,8 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\tString ourSchemaName = tableName.getSchemaName();"
]
},
{
"added": [
"\t\t// e.g.: select w1.i from t1 w1 order by test2.w1.i; (incorrect)",
"\t\t\t// Compare column's schema name with table descriptor's if it is",
"\t\t\t// not a synonym since a synonym can be declared in a different",
"\t\t\t// schema.",
"\t\t\tif (tableName.equals(origTableName) && ",
"\t\t\t\t\t! schemaName.equals(tableDescriptor.getSchemaDescriptor().getSchemaName()))"
],
"header": "@@ -2334,10 +2335,14 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t\t// Compare column's schema name with table descriptor's",
"\t\t\tif (! schemaName.equals(tableDescriptor.getSchemaDescriptor().getSchemaName()))"
]
},
{
"added": [
"\t\t\tif (! getExposedName().equals(getOrigTableName().getTableName()))"
],
"header": "@@ -2349,7 +2354,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\t\tif (! getExposedName().equals(tableName.getTableName()))"
]
},
{
"added": [
"\t\tif (! getExposedName().equals(getOrigTableName().getSchemaName() + \".\" + name))"
],
"header": "@@ -2360,7 +2365,7 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t\tif (! getExposedName().equals(tableName.getSchemaName() + \".\" + name))"
]
},
{
"added": [
"\t * If the tableName is a synonym, it will be resolved here.",
"\t * The original table name is retained in origTableName.",
"\t * "
],
"header": "@@ -2372,7 +2377,9 @@ public class FromBaseTable extends FromTable",
"removed": [
"\t *"
]
}
]
}
] |
derby-DERBY-1909-41fbd581
|
DERBY-1909: ALTER TABLE DROP COLUMN needs to update GRANTed privileges
When ALTER TABLE DROP COLUMN is used to drop a column from a table, it needs to update the GRANTed column privileges on that table.
The core of this proposed patch involves refactoring and reusing the
DERBY-1847 method which knows how to rewrite SYSCOLPERMS rows
to update the COLUMNS column. The DERBY-1847 version of that code
only handled the case of adding a bit to the COLUMNS column; this patch
extends that method to support removing a bit from the COLUMNS
column as well, then calls the method from the AlterTable execution logic.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@503550 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java",
"hunks": [
{
"added": [
"\t{",
"\t\trewriteSYSCOLPERMSforAlterTable(tableID, tc, null);",
"\t}",
"\t/**",
"\t * Update SYSCOLPERMS due to dropping a column from a table.",
"\t *",
"\t * Since ALTER TABLE .. DROP COLUMN .. has removed a column from the",
"\t * table, we need to shrink COLUMNS by removing the corresponding bit",
"\t * position, and shifting all the subsequent bits \"left\" one position.",
"\t *",
"\t * @param tableID\tThe UUID of the table from which a col has been dropped",
"\t * @param tc\t\tTransactionController for the transaction",
"\t * @param columnDescriptor Information about the dropped column",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tpublic void updateSYSCOLPERMSforDropColumn(UUID tableID, ",
"\t\t\tTransactionController tc, ColumnDescriptor columnDescriptor)",
"\t\tthrows StandardException",
"\t{",
"\t\trewriteSYSCOLPERMSforAlterTable(tableID, tc, columnDescriptor);",
"\t}",
"\t/**",
"\t * Workhorse for ALTER TABLE-driven mods to SYSCOLPERMS",
"\t *",
"\t * This method finds all the SYSCOLPERMS rows for this table. Then it",
"\t * iterates through each row, either adding a new column to the end of",
"\t * the table, or dropping a column from the table, as appropriate. It",
"\t * updates each SYSCOLPERMS row to store the new COLUMNS value.",
"\t *",
"\t * @param tableID\tThe UUID of the table being altered",
"\t * @param tc\t\tTransactionController for the transaction",
"\t * @param columnDescriptor Dropped column info, or null if adding",
"\t *",
"\t * @exception StandardException\t\tThrown on error",
"\t */",
"\tprivate void rewriteSYSCOLPERMSforAlterTable(UUID tableID,",
"\t\t\tTransactionController tc, ColumnDescriptor columnDescriptor)",
"\t\tthrows StandardException"
],
"header": "@@ -2363,6 +2363,45 @@ public final class\tDataDictionaryImpl",
"removed": []
},
{
"added": [
"\t\tin SYSCOLPERMS and adjust the \"COLUMNS\" column in SYSCOLPERMS to ",
"\t\taccomodate the added or dropped column in the tableid*/"
],
"header": "@@ -2395,8 +2434,8 @@ public final class\tDataDictionaryImpl",
"removed": [
"\t\tin SYSCOLPERMS and expand the \"COLUMNS\" column in SYSCOLPERMS to ",
"\t\taccomodate the newly added column to the tableid*/"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [],
"header": "@@ -673,17 +673,6 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t * Currently, column privileges are not repaired when",
"\t * dropping a column. This is bug DERBY-1909, and for the",
"\t * time being we simply reject DROP COLUMN if it is specified",
"\t * when sqlAuthorization is true (that check occurs in the",
"\t * parser, not here). When DERBY-1909 is fixed:",
"\t * - Update this comment",
"\t * - Remove the check in dropColumnDefinition() in the parser",
"\t * - consolidate all the tests in altertableDropColumn.sql",
"\t * back into altertable.sql and remove the separate",
"\t * altertableDropColumn files",
"\t * "
]
}
]
}
] |
derby-DERBY-1909-ee5857f3
|
DERBY-1489: Provide ALTER TABLE DROP COLUMN functionality
This patch provides support for ALTER TABLE t DROP COLUMN c.
The patch modifies the SQL parser so that it supports statements of the form:
ALTER TABLE t DROP [COLUMN] c [CASCADE|RESTRICT]
If you don't specify CASCADE or RESTRICT, the default is CASCADE.
If you specify RESTRICT, then the column drop will be rejected if it would
cause a dependent view, trigger, check constraint, unique constraint,
foreign key constraint, or primary key constraint to become invalid.
Currently, column privileges are not properly adjusted when dropping a
column. This is bug DERBY-1909, and for now we simply reject DROP COLUMN
if it is specified when sqlAuthorization is true. When DERBY-1909 is fixed,
the tests in altertableDropColumn.sql should be merged into altertable.sql,
and altertableDropColumn.sql (and .out) should be removed.
This new feature is currently undocumented. DERBY-1926 tracks the documentation
changes necessary to document this feature.
The execution logic for ALTER TABLE DROP COLUMN is in AlterTableConstantAction,
and was not substantially modified by this change. The primary changes to
that existing code were:
- to hook RESTRICT processing up to the dependency manager so that
dependent view processing was sensitive to whether the user
had specified CASCADE or RESTRICT
- to reread the table descriptor from the catalogs after dropping all the
dependent schema objects and before compressing the table, so that the
proper schema information was used during the compress.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@453420 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/SPSDescriptor.java",
"hunks": [
{
"added": [
"\t\t\tcase DependencyManager.DROP_COLUMN_RESTRICT:"
],
"header": "@@ -908,6 +908,7 @@ public class SPSDescriptor extends TupleDescriptor",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/ViewDescriptor.java",
"hunks": [
{
"added": [
"\t\t case DependencyManager.DROP_COLUMN:"
],
"header": "@@ -248,6 +248,7 @@ public final class ViewDescriptor extends TupleDescriptor",
"removed": []
},
{
"added": [
"\t\t // DROP_COLUMN_RESTRICT is similar. Any case which arrives",
"\t\t // at this default: statement causes the exception to be",
"\t\t // thrown, indicating that the DDL modification should be",
"\t\t // rejected because a view is dependent on the underlying",
"\t\t // object (table, column, privilege, etc.)"
],
"header": "@@ -282,6 +283,11 @@ public final class ViewDescriptor extends TupleDescriptor",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"\t * This routine drops a column from a table, taking care",
"\t * to properly handle the various related schema objects.",
"\t * ",
"\t * The syntax which gets you here is:",
"\t * ",
"\t * ALTER TABLE tbl DROP [COLUMN] col [CASCADE|RESTRICT]",
"\t * ",
"\t * The keyword COLUMN is optional, and if you don't",
"\t * specify CASCADE or RESTRICT, the default is CASCADE",
"\t * (the default is chosen in the parser, not here).",
"\t * ",
"\t * If you specify RESTRICT, then the column drop should be",
"\t * rejected if it would cause a dependent schema object",
"\t * to become invalid.",
"\t * ",
"\t * If you specify CASCADE, then the column drop should",
"\t * additionally drop other schema objects which have",
"\t * become invalid.",
"\t * ",
"\t * You may not drop the last (only) column in a table.",
"\t * ",
"\t * Schema objects of interest include:",
"\t * - views",
"\t * - triggers",
"\t * - constraints",
"\t * - check constraints",
"\t * - primary key constraints",
"\t * - foreign key constraints",
"\t * - unique key constraints",
"\t * - not null constraints",
"\t * - privileges",
"\t * - indexes",
"\t * - default values",
"\t * ",
"\t * Dropping a column may also change the column position",
"\t * numbers of other columns in the table, which may require",
"\t * fixup of schema objects (such as triggers and column",
"\t * privileges) which refer to columns by column position number.",
"\t * ",
"\t * Currently, column privileges are not repaired when",
"\t * dropping a column. This is bug DERBY-1909, and for the",
"\t * time being we simply reject DROP COLUMN if it is specified",
"\t * when sqlAuthorization is true (that check occurs in the",
"\t * parser, not here). When DERBY-1909 is fixed:",
"\t * - Update this comment",
"\t * - Remove the check in dropColumnDefinition() in the parser",
"\t * - consolidate all the tests in altertableDropColumn.sql",
"\t * back into altertable.sql and remove the separate",
"\t * altertableDropColumn files",
"\t * ",
"\t * Indexes are a bit interesting. The official SQL spec",
"\t * doesn't talk about indexes; they are considered to be",
"\t * an imlementation-specific performance optimization.",
"\t * The current Derby behavior is that:",
"\t * - CASCADE/RESTRICT doesn't matter for indexes",
"\t * - when a column is dropped, it is removed from any indexes",
"\t * which contain it.",
"\t * - if that column was the only column in the index, the",
"\t * entire index is dropped. ",
"\t *",
" * @param activation the current activation"
],
"header": "@@ -663,6 +663,67 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": []
},
{
"added": [
"\t\tdm.invalidateFor(td, ",
" (cascade ? DependencyManager.DROP_COLUMN",
" : DependencyManager.DROP_COLUMN_RESTRICT),",
" lcc);"
],
"header": "@@ -711,7 +772,10 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tdm.invalidateFor(td, DependencyManager.DROP_COLUMN, lcc);"
]
},
{
"added": [
"\t\t\t\t// Reject the DROP COLUMN, because there exists a constraint",
"\t\t\t\t// which references this column.",
"\t\t\t\t//",
"\t\t\t\tthrow StandardException.newException(SQLState.LANG_PROVIDER_HAS_DEPENDENT_OBJECT,"
],
"header": "@@ -812,13 +876,13 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\t\t\tif (numRefCols > 1 || cd.getConstraintType() == DataDictionary.PRIMARYKEY_CONSTRAINT)",
"\t\t\t\t{",
"\t\t\t\t\tthrow StandardException.newException(SQLState.LANG_PROVIDER_HAS_DEPENDENT_OBJECT,",
"\t\t\t\t}"
]
}
]
}
] |
derby-DERBY-1913-79366f5f
|
DERBY-1913 storetests/st_reclaim_longcol.java fails intermittently
disabling test2, which is still machine dependent. test1 covers the original intended
code path to verify that blobs are marked for post commit immediately, rather
than waiting for all rows on a page to be deleted.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1242889 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1913-cf5ac0cc
|
DERBY-1913 storetests/st_reclaim_longcol.java fails intermittently
The test was counting on being able to control the number of FREE pages,
but the number is very dependent on the ability of the background thread to run
in a timely manner. Changed the test to check number of allocated pages,
which at least correctly tests that what we think should be background reclaimed
eventually is. Still needs some wait logic which I think will work better
now. I tested against Knut's patch to delay the daemon 1 second for every
piece of work, and the old test always failed while the new test succeeded.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1242620 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1914-2d14fe72
|
DERBY-1914 test lang/wisconsin gives garbage output on zOS
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@1355569 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1917-0aceaa9f
|
DERBY-1917: Clob.position fails with search strings longer than 256 chars
This patch was contributed by V. Narayanan ([email protected])
The position algorithm proceeds in a chunked fashion, searching for 256
byte chunks of the search string at a time. The chunking algorithm contained
two flaws:
- tmpPatternS = searchStr.substring(patternIndex, 256);
+ tmpPatternS = searchStr.substring(patternIndex , patternIndex + 256);
searchStr.substring(patternIndex , patternIndex + 256); has to actually
return 256 characters starting from patternIndex. This was resulting in
an empty string being returned when the string length exceeded 256.
- tmpPatternS = searchStr;
+ tmpPatternS = searchStr.substring(patternIndex , patternLength);
Assume that the string length is 258; then in the first iteration it
returned 256 characters. In the second it was returning the whole string instead
of the remaining two characters. Doing a tmpPatternS =
searchStr.substring(patternIndex , patternLength); corrected this problem.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@493262 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedClob.java",
"hunks": [
{
"added": [
"\t* begins at position <code>start</code>. The method uses the following",
"\t* algorithm for the search",
"\t*",
"\t*",
"\t* 1)Is the length of the current pattern string to be matched greater than 256 ?",
"\t*",
"\t*\t1.1)If \"YES\"",
"\t*\t\tExtract the first 256 bytes as the current pattern to be matched",
"\t*",
"\t*\t\tIf \"NO\"",
"\t*\t\tMake the pattern string itself as the current pattern to be matched",
"\t*",
"\t*\t1.2)Initialize a variable that will indicate the character in the pattern",
"\t*\t\tString being matched to zero. (say currPatternPos)",
"\t*",
"\t* 2)Read the 256 bytes of the Clob from the database",
"\t*",
"\t*\t2.1)Initialize a variable that will indicate the current index in this array",
"\t*\t\tto zero. (say currClobPos)",
"\t*\t2.2)Exit if there are no more characters to be read in the Clob",
"\t*",
"\t* 3)Initialize a bestMatchPosition that will keep storing the next occurence of the ",
"\t*\tfirst character in the pattern.This will be useful when we want to go back and ",
"\t*\tstart searching in the Clob array when a mismatch occurs.",
"\t*",
"\t* 4)Do the characters in currPatternPos and currClobPos match ?",
"\t*\t4.1)If \"YES\" ",
"\t*",
"\t*\t\tIncrement currPatternPos and currClobPos. ",
"\t*",
"\t*\t\tIf currPatternPos is not 0 and the character in the ",
"\t*\t\tcurrentClobPos is the same as the first character in the",
"\t*\t\tpattern set bestMatchPosition = currentClobPos",
"\t*",
"\t*\t4.2)If \"No\" ",
"\t*",
"\t*\t\tset currClobPos = bestMatchPosition",
"\t*\t\tset currPatternPos = 0",
"\t*",
"\t*\t4.3)If currPatternPos > 256 ",
"\t*\t\t4.3.1)If \"YES\" ",
"\t*\t\t\t Return the current position in the Clob if all characters ",
"\t*\t\t\t have been matched otherwise perform step 1 to fetch the",
"\t*\t\t\t next 256 characters and increment matchCount",
"\t*\t\t4.3.2)If \"NO\" repeat Step 4",
"\t*",
"\t*\t4.4)If currClobPos > 256",
"\t*\t\t4.4.1)If \"YES\"",
"\t*\t\t\t Repeat step 2 to fetch next 256 characters",
"\t*\t\t4.4.2)If \"NO\"",
"\t*\t\t\t Repeat step 4"
],
"header": "@@ -371,7 +371,57 @@ final class EmbedClob extends ConnectionChild implements Clob",
"removed": [
" * begins at position <code>start</code>."
]
},
{
"added": [
"\t\t\t\t\t\t\t\t//Keep extracting substrings of length 256 from the pattern string",
"\t\t\t\t\t\t\t\t//and use these substrings for comparison with the data from the Clob",
"\t\t\t\t\t\t\t\t//if the subString remaining has a length > 256 then extract 256 bytes",
"\t\t\t\t\t\t\t\t//and return it",
"\t\t\t\t\t\t\t\t//otherwise return the remaining string ",
"\t\t\t\t\t\t\t\t\ttmpPatternS = searchStr.substring(patternIndex , patternIndex + 256);",
"\t\t\t\t\t\t\t\t\ttmpPatternS = searchStr.substring(patternIndex , patternLength);"
],
"header": "@@ -459,10 +509,15 @@ search:",
"removed": [
"\t\t\t\t\t\t\t\t\ttmpPatternS = searchStr.substring(patternIndex, 256);",
"\t\t\t\t\t\t\t\t\ttmpPatternS = searchStr;"
]
}
]
}
] |
derby-DERBY-1925-5110c0a4
|
DERBY-1925 : (Add re-encrytion of database test cases to the upgrade test.)
Merged fix (r452682) from 10.2 branch to trunk.
This patch adds test cases to the upgrade test to test encryption of an
un-encrypted database and re-encryption of an encrypted database.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@466279 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1926-ee5857f3
|
DERBY-1489: Provide ALTER TABLE DROP COLUMN functionality
This patch provides support for ALTER TABLE t DROP COLUMN c.
The patch modifies the SQL parser so that it supports statements of the form:
ALTER TABLE t DROP [COLUMN] c [CASCADE|RESTRICT]
If you don't specify CASCADE or RESTRICT, the default is CASCADE.
If you specify RESTRICT, then the column drop will be rejected if it would
cause a dependent view, trigger, check constraint, unique constraint,
foreign key constraint, or primary key constraint to become invalid.
Currently, column privileges are not properly adjusted when dropping a
column. This is bug DERBY-1909, and for now we simply reject DROP COLUMN
if it is specified when sqlAuthorization is true. When DERBY-1909 is fixed,
the tests in altertableDropColumn.sql should be merged into altertable.sql,
and altertableDropColumn.sql (and .out) should be removed.
This new feature is currently undocumented. DERBY-1926 tracks the documentation
changes necessary to document this feature.
The execution logic for ALTER TABLE DROP COLUMN is in AlterTableConstantAction,
and was not substantially modified by this change. The primary changes to
that existing code were:
- to hook RESTRICT processing up to the dependency manager so that
dependent view processing was sensitive to whether the user
had specified CASCADE or RESTRICT
- to reread the table descriptor from the catalogs after dropping all the
dependent schema objects and before compressing the table, so that the
proper schema information was used during the compress.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@453420 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/SPSDescriptor.java",
"hunks": [
{
"added": [
"\t\t\tcase DependencyManager.DROP_COLUMN_RESTRICT:"
],
"header": "@@ -908,6 +908,7 @@ public class SPSDescriptor extends TupleDescriptor",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/sql/dictionary/ViewDescriptor.java",
"hunks": [
{
"added": [
"\t\t case DependencyManager.DROP_COLUMN:"
],
"header": "@@ -248,6 +248,7 @@ public final class ViewDescriptor extends TupleDescriptor",
"removed": []
},
{
"added": [
"\t\t // DROP_COLUMN_RESTRICT is similar. Any case which arrives",
"\t\t // at this default: statement causes the exception to be",
"\t\t // thrown, indicating that the DDL modification should be",
"\t\t // rejected because a view is dependent on the underlying",
"\t\t // object (table, column, privilege, etc.)"
],
"header": "@@ -282,6 +283,11 @@ public final class ViewDescriptor extends TupleDescriptor",
"removed": []
}
]
},
{
"file": "java/engine/org/apache/derby/impl/sql/execute/AlterTableConstantAction.java",
"hunks": [
{
"added": [
"\t * This routine drops a column from a table, taking care",
"\t * to properly handle the various related schema objects.",
"\t * ",
"\t * The syntax which gets you here is:",
"\t * ",
"\t * ALTER TABLE tbl DROP [COLUMN] col [CASCADE|RESTRICT]",
"\t * ",
"\t * The keyword COLUMN is optional, and if you don't",
"\t * specify CASCADE or RESTRICT, the default is CASCADE",
"\t * (the default is chosen in the parser, not here).",
"\t * ",
"\t * If you specify RESTRICT, then the column drop should be",
"\t * rejected if it would cause a dependent schema object",
"\t * to become invalid.",
"\t * ",
"\t * If you specify CASCADE, then the column drop should",
"\t * additionally drop other schema objects which have",
"\t * become invalid.",
"\t * ",
"\t * You may not drop the last (only) column in a table.",
"\t * ",
"\t * Schema objects of interest include:",
"\t * - views",
"\t * - triggers",
"\t * - constraints",
"\t * - check constraints",
"\t * - primary key constraints",
"\t * - foreign key constraints",
"\t * - unique key constraints",
"\t * - not null constraints",
"\t * - privileges",
"\t * - indexes",
"\t * - default values",
"\t * ",
"\t * Dropping a column may also change the column position",
"\t * numbers of other columns in the table, which may require",
"\t * fixup of schema objects (such as triggers and column",
"\t * privileges) which refer to columns by column position number.",
"\t * ",
"\t * Currently, column privileges are not repaired when",
"\t * dropping a column. This is bug DERBY-1909, and for the",
"\t * time being we simply reject DROP COLUMN if it is specified",
"\t * when sqlAuthorization is true (that check occurs in the",
"\t * parser, not here). When DERBY-1909 is fixed:",
"\t * - Update this comment",
"\t * - Remove the check in dropColumnDefinition() in the parser",
"\t * - consolidate all the tests in altertableDropColumn.sql",
"\t * back into altertable.sql and remove the separate",
"\t * altertableDropColumn files",
"\t * ",
"\t * Indexes are a bit interesting. The official SQL spec",
"\t * doesn't talk about indexes; they are considered to be",
"\t * an imlementation-specific performance optimization.",
"\t * The current Derby behavior is that:",
"\t * - CASCADE/RESTRICT doesn't matter for indexes",
"\t * - when a column is dropped, it is removed from any indexes",
"\t * which contain it.",
"\t * - if that column was the only column in the index, the",
"\t * entire index is dropped. ",
"\t *",
" * @param activation the current activation"
],
"header": "@@ -663,6 +663,67 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": []
},
{
"added": [
"\t\tdm.invalidateFor(td, ",
" (cascade ? DependencyManager.DROP_COLUMN",
" : DependencyManager.DROP_COLUMN_RESTRICT),",
" lcc);"
],
"header": "@@ -711,7 +772,10 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\tdm.invalidateFor(td, DependencyManager.DROP_COLUMN, lcc);"
]
},
{
"added": [
"\t\t\t\t// Reject the DROP COLUMN, because there exists a constraint",
"\t\t\t\t// which references this column.",
"\t\t\t\t//",
"\t\t\t\tthrow StandardException.newException(SQLState.LANG_PROVIDER_HAS_DEPENDENT_OBJECT,"
],
"header": "@@ -812,13 +876,13 @@ class AlterTableConstantAction extends DDLSingleTableConstantAction",
"removed": [
"\t\t\t\tif (numRefCols > 1 || cd.getConstraintType() == DataDictionary.PRIMARYKEY_CONSTRAINT)",
"\t\t\t\t{",
"\t\t\t\t\tthrow StandardException.newException(SQLState.LANG_PROVIDER_HAS_DEPENDENT_OBJECT,",
"\t\t\t\t}"
]
}
]
}
] |
derby-DERBY-1931-6c248652
|
DERBY-1931: Derby JAR files should be grouped as a single library in Package Explorer
Contributed by Aaron Tarter
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@581971 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "plugins/eclipse/org.apache.derby.ui/src/org/apache/derby/ui/popup/actions/AddDerbyNature.java",
"hunks": [
{
"added": [
"import java.util.ArrayList;",
"import java.util.List;",
"",
"import org.apache.derby.ui.container.DerbyClasspathContainer;"
],
"header": "@@ -21,9 +21,12 @@",
"removed": [
"import org.apache.derby.ui.util.DerbyUtils;"
]
}
]
},
{
"file": "plugins/eclipse/org.apache.derby.ui/src/org/apache/derby/ui/popup/actions/RemoveDerbyNature.java",
"hunks": [
{
"added": [
"import java.util.List;",
"import org.apache.derby.ui.container.DerbyClasspathContainer;"
],
"header": "@@ -21,11 +21,12 @@",
"removed": [
"import org.apache.derby.ui.util.DerbyUtils;"
]
}
]
},
{
"file": "plugins/eclipse/org.apache.derby.ui/src/org/apache/derby/ui/util/DerbyUtils.java",
"hunks": [
{
"added": [],
"header": "@@ -21,7 +21,6 @@",
"removed": [
"import java.io.File;"
]
},
{
"added": [],
"header": "@@ -32,7 +31,6 @@ import org.apache.derby.ui.properties.DerbyProperties;",
"removed": [
"import org.eclipse.core.runtime.FileLocator;"
]
},
{
"added": [],
"header": "@@ -44,8 +42,6 @@ import org.eclipse.debug.core.ILaunchConfigurationType;",
"removed": [
"import org.eclipse.jdt.core.IClasspathEntry;",
"import org.eclipse.jdt.core.JavaCore;"
]
},
{
"added": [],
"header": "@@ -57,120 +53,11 @@ import org.osgi.framework.Constants;",
"removed": [
"\tprivate final static String PLUGIN_ROOT = \"ECLIPSE_HOME/plugins/\";",
"\tpublic static IClasspathEntry[] addDerbyJars(IClasspathEntry[] rawCP) throws Exception{",
"\t\t",
"\t\tIClasspathEntry[] newRawCP= null;",
"\t\ttry{",
"\t\t\t//New OSGI way",
"\t\t\tManifestElement[] elements_core, elements_ui;",
"\t\t\telements_core = getElements(CommonNames.CORE_PATH);",
"\t\t\telements_ui=getElements(CommonNames.UI_PATH);",
"\t\t\t",
"\t\t\tBundle bundle=Platform.getBundle(CommonNames.CORE_PATH);",
"\t\t\tURL pluginURL = FileLocator.resolve(FileLocator.find(bundle, new Path(\"/\"), null));",
"\t\t\tString pluginName = new File(pluginURL.getPath()).getName();",
"",
"\t\t\tnewRawCP=new IClasspathEntry[rawCP.length + (elements_core.length) + (elements_ui.length-1)];",
"\t\t\tSystem.arraycopy(rawCP, 0, newRawCP, 0, rawCP.length);",
"\t\t\t",
"\t\t\t//Add the CORE jars",
"\t\t\tint oldLength=rawCP.length;",
"\t\t\tfor(int i=0;i<elements_core.length;i++){",
"\t\t\t\t// add JAR as var type entry relative to the eclipse plugins dir, so the entry is portable ",
"\t\t\t\tnewRawCP[oldLength+i]=JavaCore.newVariableEntry(new Path(PLUGIN_ROOT+pluginName+\"/\"+elements_core[i].getValue()), null, null);\t\t\t\t",
"\t\t\t\t",
"\t\t\t}",
"\t\t\t // Add the UI jars",
"\t\t\tbundle=Platform.getBundle(CommonNames.UI_PATH);",
"\t\t\tpluginURL = FileLocator.resolve(FileLocator.find(bundle, new Path(\"/\"), null));",
"\t\t\tpluginName = new File(pluginURL.getPath()).getName();",
"\t\t\toldLength=oldLength+elements_core.length -1; ",
"\t\t\tfor(int i=0;i<elements_ui.length;i++){",
"\t\t\t\tif(!(elements_ui[i].getValue().toLowerCase().equals(\"ui.jar\"))){",
"\t\t\t\t\t// add JAR as var type entry relative to the eclipse plugins dir, so the entry is portable",
"\t\t\t\t\tnewRawCP[oldLength+i]=JavaCore.newVariableEntry(new Path(PLUGIN_ROOT+pluginName+\"/\"+elements_ui[i].getValue()), null, null);",
"\t\t\t\t}",
"\t\t\t}\t\t\t\t\t",
"\t\t\treturn newRawCP;",
"\t\t}catch(Exception e){",
"\t\t\tthrow e;",
"\t\t}",
"\t\t",
"\t}",
"\tpublic static IClasspathEntry[] removeDerbyJars(IClasspathEntry[] rawCP) throws Exception{",
"\t\tArrayList arrL=new ArrayList();",
"\t\tfor (int i=0;i<rawCP.length;i++){",
"\t\t\tarrL.add(rawCP[i]);",
"\t\t}",
"\t\tIClasspathEntry[] newRawCP= null;",
"\t\ttry{",
"\t\t\tManifestElement[] elements_core, elements_ui;",
"\t\t\telements_core = getElements(CommonNames.CORE_PATH);",
"\t\t\telements_ui=getElements(CommonNames.UI_PATH);",
"\t\t\t",
"\t\t\tBundle bundle;",
"\t\t\tURL pluginURL,jarURL,localURL;",
"",
"\t\t\tboolean add;",
"\t\t\tIClasspathEntry icp=null;",
"\t\t\tfor (int j=0;j<arrL.size();j++){",
"\t\t\t\tbundle=Platform.getBundle(CommonNames.CORE_PATH);",
"\t\t\t\tpluginURL = bundle.getEntry(\"/\");",
"\t\t\t\tadd=true;",
"\t\t\t\ticp=(IClasspathEntry)arrL.get(j);",
"\t\t\t\t//remove 'core' jars",
"\t\t\t\tfor (int i=0;i<elements_core.length;i++){",
"\t\t\t\t\tjarURL= new URL(pluginURL,elements_core[i].getValue());",
"\t\t\t\t\tlocalURL=Platform.asLocalURL(jarURL);",
"\t\t\t\t\tif(((icp).equals(JavaCore.newLibraryEntry(new Path(localURL.getPath()), null, null)))||",
"\t\t\t\t\t\t\ticp.getPath().toString().toLowerCase().endsWith(\"derby.jar\")||",
"\t\t\t\t\t\t\ticp.getPath().toString().toLowerCase().endsWith(\"derbynet.jar\")||",
"\t\t\t\t\t\t\ticp.getPath().toString().toLowerCase().endsWith(\"derbyclient.jar\")||",
"\t\t\t\t\t\t\ticp.getPath().toString().toLowerCase().endsWith(\"derbytools.jar\")){",
"\t\t\t\t\t\tadd=false;",
"\t\t\t\t\t}",
"\t\t\t\t}",
"\t\t\t\tif(!add){",
"\t\t\t\t\tarrL.remove(j);",
"\t\t\t\t\tj=j-1;",
"\t\t\t\t}",
"\t\t\t\t//REMOVE 'ui' jars",
"\t\t\t\tbundle=Platform.getBundle(CommonNames.UI_PATH);",
"\t\t\t\tpluginURL = bundle.getEntry(\"/\");",
"\t\t\t\tadd=true;",
"\t\t\t\t",
"\t\t\t\tfor (int i=0;i<elements_ui.length;i++){",
"\t\t\t\t\tif(!(elements_ui[i].getValue().toLowerCase().equals(\"ui.jar\"))){",
"\t\t\t\t\t\tjarURL= new URL(pluginURL,elements_ui[i].getValue());",
"\t\t\t\t\t\tlocalURL=Platform.asLocalURL(jarURL);\t\t\t\t\t",
"\t\t\t\t\t\tif((icp).equals(JavaCore.newLibraryEntry(new Path(localURL.getPath()), null, null))){",
"\t\t\t\t\t\t\tadd=false;",
"\t\t\t\t\t\t}",
"\t\t\t\t\t}",
"\t\t\t\t}",
"\t\t\t\tif(!add){",
"\t\t\t\t\tarrL.remove(j);",
"\t\t\t\t\tj=j-1;",
"\t\t\t\t}",
"\t\t\t}",
"\t\t\tnewRawCP=new IClasspathEntry[arrL.size()];",
"\t\t\tfor (int i=0;i<arrL.size();i++){",
"\t\t\t\tnewRawCP[i]=(IClasspathEntry)arrL.get(i);",
"\t\t\t}",
"\t\t\treturn newRawCP;",
"\t\t}catch(Exception e){",
"\t\t\te.printStackTrace();",
"\t\t\t//return rawCP;",
"\t\t\tthrow e;",
"\t\t}",
"\t\t",
"\t}"
]
}
]
}
] |
derby-DERBY-1938-a92196cc
|
DERBY-1938: Add support for setObject(<arg>, null)
Allow calling the two-argument PreparedStatement.setObject method with null
to set a column value in the database to SQL NULL. The recommended way for
maximum portability is to use the three-argument setObject method or the
setNull method.
Patch file: derby-1938-1b-reworked_patch.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@995089 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1939-801c5156
|
DERBY-1939
Bug was already fixed in trunk, merging added tests and new sanity
check from 10.1 codeline to trunk.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@454623 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/iapi/store/access/DiskHashtable.java",
"hunks": [
{
"added": [],
"header": "@@ -18,7 +18,6 @@",
"removed": [
""
]
},
{
"added": [
"import org.apache.derby.iapi.services.sanity.SanityManager;",
" * This class is used by BackingStoreHashtable when the BackingStoreHashtable ",
" * must spill to disk. It implements the methods of a hash table: put, get, ",
" * remove, elements, however it is not implemented as a hash table. In order to",
" * minimize the amount of unique code it is implemented using a Btree and a ",
" * heap conglomerate. The Btree indexes the hash code of the row key. The ",
" * actual key may be too long for our Btree implementation."
],
"header": "@@ -29,17 +28,18 @@ import org.apache.derby.iapi.error.StandardException;",
"removed": [
"import org.apache.derby.impl.store.access.heap.HeapRowLocation;",
" * This class is used by BackingStoreHashtable when the BackingStoreHashtable must spill to disk.",
" * It implements the methods of a hash table: put, get, remove, elements, however it is not implemented",
" * as a hash table. In order to minimize the amount of unique code it is implemented using a Btree and a heap",
" * conglomerate. The Btree indexes the hash code of the row key. The actual key may be too long for",
" * our Btree implementation."
]
},
{
"added": [
" private final long rowConglomerateId;",
" private ConglomerateController rowConglomerate;",
" private final long btreeConglomerateId;",
" private ConglomerateController btreeConglomerate;",
" private final DataValueDescriptor[] btreeRow;",
" private final int[] key_column_numbers;",
" private final boolean remove_duplicates;",
" private final TransactionController tc;",
" private final DataValueDescriptor[] row;",
" private final DataValueDescriptor[] scanKey = { new SQLInteger()};",
" private int size;",
" private boolean keepStatistics;",
" * @param template An array of DataValueDescriptors that ",
" * serves as a template for the rows.",
" * @param key_column_numbers The indexes of the key columns (0 based)",
" * @param remove_duplicates If true then rows with duplicate keys are ",
" * removed.",
" * @param keepAfterCommit If true then the hash table is kept after ",
" * a commit",
" public DiskHashtable( ",
" TransactionController tc,",
" DataValueDescriptor[] template,",
" int[] key_column_numbers,",
" boolean remove_duplicates,",
" boolean keepAfterCommit)",
" this.tc = tc;",
" this.key_column_numbers = key_column_numbers;",
" this.remove_duplicates = remove_duplicates;",
" LanguageConnectionContext lcc = (LanguageConnectionContext)",
" ContextService.getContextOrNull(",
" LanguageConnectionContext.CONTEXT_ID);",
"",
"",
" // Create template row used for creating the conglomerate and ",
" // fetching rows.",
" row = new DataValueDescriptor[template.length];",
" {",
"",
" if (SanityManager.DEBUG)",
" {",
" // must have an object template for all cols in hash overflow.",
" SanityManager.ASSERT(",
" row[i] != null, ",
" \"Template for the hash table must have non-null object\");",
" }",
" }",
"",
" int tempFlags = ",
" keepAfterCommit ? ",
" (TransactionController.IS_TEMPORARY | ",
" TransactionController.IS_KEPT) : ",
" TransactionController.IS_TEMPORARY;",
" // create the \"base\" table of the hash overflow.",
" rowConglomerateId = ",
" tc.createConglomerate( ",
" \"heap\",",
" template,",
" (ColumnOrdering[]) null,",
" (Properties) null,",
" tempFlags);",
"",
" // open the \"base\" table of the hash overflow.",
" rowConglomerate = ",
" tc.openConglomerate( ",
" rowConglomerateId,",
" keepAfterCommit,",
" TransactionController.OPENMODE_FORUPDATE,",
" TransactionController.MODE_TABLE,",
" TransactionController.ISOLATION_NOLOCK/* Single thread only */);",
"",
" // create the index on the \"hash\" base table. The key of the index",
" // is the hash code of the row key. The second column is the ",
" // RowLocation of the row in the \"base\" table of the hash overflow.",
" btreeRow = ",
" new DataValueDescriptor[] ",
" { new SQLInteger(), rowConglomerate.newRowLocationTemplate()};",
"",
"",
" btreeProps.put(\"baseConglomerateId\", ",
" String.valueOf(rowConglomerateId));",
" btreeProps.put(\"rowLocationColumn\", ",
" \"1\");",
" btreeProps.put(\"allowDuplicates\", ",
" \"false\"); // Because the row location is part of the key",
" btreeProps.put(\"nKeyFields\", ",
" \"2\"); // Include the row location column",
" btreeProps.put(\"nUniqueColumns\", ",
" \"2\"); // Include the row location column",
" btreeProps.put(\"maintainParentLinks\", ",
" \"false\");",
" btreeConglomerateId = ",
" tc.createConglomerate( ",
" \"BTREE\",",
" btreeRow,",
" (ColumnOrdering[]) null,",
" btreeProps,",
" tempFlags);",
"",
" // open the \"index\" of the hash overflow.",
" btreeConglomerate = ",
" tc.openConglomerate( ",
" btreeConglomerateId,",
" keepAfterCommit,",
" TransactionController.OPENMODE_FORUPDATE,",
" TransactionController.MODE_TABLE,",
" TransactionController.ISOLATION_NOLOCK /*Single thread only*/ );",
""
],
"header": "@@ -49,77 +49,126 @@ import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;",
"removed": [
" private final long rowConglomerateId;",
" private ConglomerateController rowConglomerate;",
" private final long btreeConglomerateId;",
" private ConglomerateController btreeConglomerate;",
" private final DataValueDescriptor[] btreeRow;",
" private final int[] key_column_numbers;",
" private final boolean remove_duplicates;",
" private final TransactionController tc;",
" private final DataValueDescriptor[] row;",
" private final DataValueDescriptor[] scanKey = { new SQLInteger()};",
" private int size;",
" private boolean keepStatistics;",
" * @param template An array of DataValueDescriptors that serves as a template for the rows.",
" * @param key_column_numbers The indexes of the key columns (0 based)",
" * @param remove_duplicates If true then rows with duplicate keys are removed",
" * @param keepAfterCommit If true then the hash table is kept after a commit",
" public DiskHashtable( TransactionController tc,",
" DataValueDescriptor[] template,",
" int[] key_column_numbers,",
" boolean remove_duplicates,",
" boolean keepAfterCommit)",
" this.tc = tc;",
" this.key_column_numbers = key_column_numbers;",
" this.remove_duplicates = remove_duplicates;",
" LanguageConnectionContext lcc = (LanguageConnectionContext)",
"\t\t\t\tContextService.getContextOrNull(LanguageConnectionContext.CONTEXT_ID);",
" row = new DataValueDescriptor[ template.length];",
" int tempFlags = keepAfterCommit ? (TransactionController.IS_TEMPORARY | TransactionController.IS_KEPT)",
" : TransactionController.IS_TEMPORARY;",
" rowConglomerateId = tc.createConglomerate( \"heap\",",
" template,",
" (ColumnOrdering[]) null,",
" (Properties) null,",
" tempFlags);",
" rowConglomerate = tc.openConglomerate( rowConglomerateId,",
" keepAfterCommit,",
" TransactionController.OPENMODE_FORUPDATE,",
" TransactionController.MODE_TABLE,",
" TransactionController.ISOLATION_NOLOCK /* Single thread only */ );",
"",
" btreeRow = new DataValueDescriptor[] { new SQLInteger(), rowConglomerate.newRowLocationTemplate()};",
" btreeProps.put( \"baseConglomerateId\", String.valueOf( rowConglomerateId));",
" btreeProps.put( \"rowLocationColumn\", \"1\");",
" btreeProps.put( \"allowDuplicates\", \"false\"); // Because the row location is part of the key",
" btreeProps.put( \"nKeyFields\", \"2\"); // Include the row location column",
" btreeProps.put( \"nUniqueColumns\", \"2\"); // Include the row location column",
" btreeProps.put( \"maintainParentLinks\", \"false\");",
" btreeConglomerateId = tc.createConglomerate( \"BTREE\",",
" btreeRow,",
" (ColumnOrdering[]) null,",
" btreeProps,",
" tempFlags);",
"",
" btreeConglomerate = tc.openConglomerate( btreeConglomerateId,",
" keepAfterCommit,",
" TransactionController.OPENMODE_FORUPDATE,",
" TransactionController.MODE_TABLE,",
" TransactionController.ISOLATION_NOLOCK /* Single thread only */ );"
]
},
{
"added": [
" * @return true if the row was added,",
" * false if it was not added (because it was a duplicate and we ",
" * are eliminating duplicates).",
" public boolean put(Object key, Object[] row)",
" if (remove_duplicates || keepStatistics)",
" isDuplicate = (getRemove(key, false, true) != null);",
" if (remove_duplicates && isDuplicate)",
"",
" // insert the row into the \"base\" conglomerate.",
" rowConglomerate.insertAndFetchLocation( ",
" (DataValueDescriptor[]) row, (RowLocation) btreeRow[1]);",
"",
" // create index row from hashcode and rowlocation just inserted, and",
" // insert index row into index.",
"",
" if (keepStatistics && !isDuplicate)",
"",
"",
" * @param key If the rows only have one key column then the key value. ",
" * If there is more than one key column then a KeyHasher",
" * the row (DataValueDescriptor[]) if there is exactly one row ",
" * with the key, or",
" public Object get(Object key)",
" return getRemove(key, false, false);",
" private Object getRemove(Object key, boolean remove, boolean existenceOnly)"
],
"header": "@@ -135,49 +184,60 @@ public class DiskHashtable",
"removed": [
" * @return true if the row was added,",
" * false if it was not added (because it was a duplicate and we are eliminating duplicates).",
" public boolean put( Object key, Object[] row)",
" if( remove_duplicates || keepStatistics)",
" isDuplicate = (getRemove( key, false, true) != null);",
" if( remove_duplicates && isDuplicate)",
" rowConglomerate.insertAndFetchLocation( (DataValueDescriptor[]) row, (RowLocation) btreeRow[1]);",
" if( keepStatistics && !isDuplicate)",
" * @param key If the rows only have one key column then the key value. If there is more than one",
" * key column then a KeyHasher",
" * the row (DataValueDescriptor[]) if there is exactly one row with the key",
" public Object get( Object key)",
" return getRemove( key, false, false);",
" private Object getRemove( Object key, boolean remove, boolean existenceOnly)"
]
},
{
"added": [
" ScanController scan = ",
" tc.openScan( ",
" btreeConglomerateId,",
" false, // do not hold",
" remove ? TransactionController.OPENMODE_FORUPDATE : 0,",
" TransactionController.MODE_TABLE,",
" TransactionController.ISOLATION_READ_UNCOMMITTED,",
" null, // Scan all the columns",
" scanKey,",
" ScanController.GE,",
" (Qualifier[][]) null,",
" scanKey,",
" ScanController.GT);",
" while (scan.fetchNext(btreeRow))",
" if (rowConglomerate.fetch(",
" (RowLocation) btreeRow[1], ",
" row, ",
" (FormatableBitSet) null /* all columns */)",
" if( rowCount == 1)",
" // if there is only one matching row just return row. ",
" retValue = BackingStoreHashtable.shallowCloneRow( row);",
" }",
" // if there is more than one row, return a vector of",
" // the rows.",
" //",
" // convert the \"single\" row retrieved from the",
" // first trip in the loop, to a vector with the",
" // first two rows."
],
"header": "@@ -185,37 +245,49 @@ public class DiskHashtable",
"removed": [
" ScanController scan = tc.openScan( btreeConglomerateId,",
" false, // do not hold",
" remove ? TransactionController.OPENMODE_FORUPDATE : 0,",
" TransactionController.MODE_TABLE,",
" TransactionController.ISOLATION_READ_UNCOMMITTED,",
" null, // Scan all the columns",
" scanKey,",
" ScanController.GE,",
" (Qualifier[][]) null,",
" scanKey,",
" ScanController.GT);",
" while( scan.fetchNext( btreeRow))",
" if( rowConglomerate.fetch( (RowLocation) btreeRow[1], row, (FormatableBitSet) null /* all columns */)",
" if( rowCount == 1) ",
" retValue = BackingStoreHashtable.shallowCloneRow( row); ",
" } "
]
}
]
}
] |
derby-DERBY-1940-d6e7d39f
|
DERBY-1940: Remove Ease of Development to conform to recent changes to the JDBC4 api.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@453913 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/client/org/apache/derby/client/am/LogicalConnection40.java",
"hunks": [
{
"added": [],
"header": "@@ -22,7 +22,6 @@",
"removed": [
"import java.sql.BaseQuery;"
]
}
]
},
{
"file": "java/client/org/apache/derby/client/net/NetConnection40.java",
"hunks": [
{
"added": [],
"header": "@@ -22,8 +22,6 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;"
]
}
]
},
{
"file": "java/client/org/apache/derby/jdbc/ClientConnectionPoolDataSource40.java",
"hunks": [
{
"added": [],
"header": "@@ -21,9 +21,6 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;",
"import java.sql.QueryObjectGenerator;"
]
}
]
},
{
"file": "java/client/org/apache/derby/jdbc/ClientDataSource40.java",
"hunks": [
{
"added": [],
"header": "@@ -21,9 +21,6 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;",
"import java.sql.QueryObjectGenerator;"
]
}
]
},
{
"file": "java/client/org/apache/derby/jdbc/ClientXADataSource40.java",
"hunks": [
{
"added": [],
"header": "@@ -21,9 +21,6 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;",
"import java.sql.QueryObjectGenerator;"
]
},
{
"added": [],
"header": "@@ -54,18 +51,6 @@ import org.apache.derby.shared.common.reference.SQLState;",
"removed": [
" /**",
" * Retrieves the QueryObjectGenerator for the given JDBC driver. If the",
" * JDBC driver does not provide its own QueryObjectGenerator, NULL is",
" * returned.",
" *",
" * @return The QueryObjectGenerator for this JDBC Driver or NULL if the",
" * driver does not provide its own implementation",
" * @exception SQLException if a database access error occurs",
" */",
" public QueryObjectGenerator getQueryObjectGenerator() throws SQLException {",
" return null;",
" }"
]
}
]
},
{
"file": "java/engine/org/apache/derby/iapi/jdbc/BrokeredConnection40.java",
"hunks": [
{
"added": [],
"header": "@@ -22,7 +22,6 @@",
"removed": [
"import java.sql.BaseQuery;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection40.java",
"hunks": [
{
"added": [],
"header": "@@ -22,13 +22,11 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbeddedConnectionPoolDataSource40.java",
"hunks": [
{
"added": [],
"header": "@@ -20,9 +20,6 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;",
"import java.sql.QueryObjectGenerator;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbeddedDataSource40.java",
"hunks": [
{
"added": [],
"header": "@@ -21,9 +21,6 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;",
"import java.sql.QueryObjectGenerator;"
]
}
]
},
{
"file": "java/engine/org/apache/derby/jdbc/EmbeddedXADataSource40.java",
"hunks": [
{
"added": [],
"header": "@@ -21,9 +21,6 @@",
"removed": [
"import java.sql.BaseQuery;",
"import java.sql.QueryObjectFactory;",
"import java.sql.QueryObjectGenerator;"
]
}
]
}
] |
derby-DERBY-1942-2c865dd4
|
- DERBY-1942 There exists difference between behavior of setNull(Types.TIME) and setTime(null) - Patch by Tomohito Nakayama ([email protected])
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@464202 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1944-8c22ad03
|
DERBY-1944: jdbcapi/ParameterMappingTest.java test does not execute test for setObject(Blob/Clob) in DerbyNetClient.
Made the test execute tests for setObject(Blob) and setObject(Clob).
Patch file: derby-1944-1a.diff
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@674849 13f79535-47bb-0310-9956-ffa450edef68
|
[] |
derby-DERBY-1947-10f111c7
|
DERBY-1947 OutOfMemoryError after repeated calls to boot and shutdown a database
Committed DERBY-1947-4.diff.
git-svn-id: https://svn.apache.org/repos/asf/db/derby/code/trunk@531638 13f79535-47bb-0310-9956-ffa450edef68
|
[
{
"file": "java/engine/org/apache/derby/impl/jdbc/EmbedConnection.java",
"hunks": [
{
"added": [
"\t\tif (!isClosed() &&",
"\t\t\t\t(rootConnection == this) && ",
"\t\t\t\t(!autoCommit && !transactionIsIdle())) {",
"\t\t\tthrow newSQLException(",
"\t\t\t\tSQLState.LANG_INVALID_TRANSACTION_STATE);",
"\t\t",
"\t\tclose(exceptionClose);"
],
"header": "@@ -1141,21 +1141,14 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\tif (isClosed())",
"\t\t \treturn;",
"",
"",
"\t\tif (rootConnection == this)",
"\t\t{",
"\t\t\t/* Throw error to match DB2/JDBC if a tran is pending in non-autocommit mode */",
"\t\t\tif (!autoCommit && !transactionIsIdle()) {",
"\t\t\t\tthrow newSQLException(SQLState.LANG_INVALID_TRANSACTION_STATE);",
"\t\t\t}",
"",
"\t\t\tclose(exceptionClose);",
"\t\telse",
"\t\t\tsetInactive(); // nested connection"
]
},
{
"added": [
"\t\t\t\t\tif (tr.isActive()) {",
"\t\t\t\t\t\tsetupContextStack();",
"\t\t\t\t\t\ttry {",
"\t\t\t\t\t\t\ttr.rollback();",
"\t\t\t\t\t\t\t",
"\t\t\t\t\t\t\t// Let go of lcc reference so it can be GC'ed after",
"\t\t\t\t\t\t\t// cleanupOnError, the tr will stay around until the",
"\t\t\t\t\t\t\t// rootConnection itself is GC'ed, which is dependent",
"\t\t\t\t\t\t\t// on how long the client program wants to hold on to",
"\t\t\t\t\t\t\t// the Connection object.",
"\t\t\t\t\t\t\ttr.clearLcc(); ",
"\t\t\t\t\t\t\ttr.cleanupOnError(e);",
"\t\t\t\t\t\t\t",
"\t\t\t\t\t\t} catch (Throwable t) {",
"\t\t\t\t\t\t\tthrow handleException(t);",
"\t\t\t\t\t\t} finally {",
"\t\t\t\t\t\t\trestoreContextStack();",
"\t\t\t\t\t\t}",
"\t\t\t\t\t} else {",
"\t\t\t\t\t\t// DERBY-1947: If another connection has closed down",
"\t\t\t\t\t\t// the database, the transaction is not active, but",
"\t\t\t\t\t\t// the cleanup has not been done yet."
],
"header": "@@ -1174,22 +1167,30 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\t\t\t\tsetupContextStack();",
"\t\t\t\t\ttry {",
"\t\t\t\t\t\ttr.rollback();",
"",
"\t\t\t\t\t\t// Let go of lcc reference so it can be GC'ed after",
"\t\t\t\t\t\t// cleanupOnError, the tr will stay around until the",
"\t\t\t\t\t\t// rootConnection itself is GC'ed, which is dependent",
"\t\t\t\t\t\t// on how long the client program wants to hold on to",
"\t\t\t\t\t\t// the Connection object.",
"",
"\t\t\t\t\t} catch (Throwable t) {",
"\t\t\t\t\t\tthrow handleException(t);",
"\t\t\t\t\t} finally {",
"\t\t\t\t\t\trestoreContextStack();"
]
},
{
"added": [],
"header": "@@ -1211,9 +1212,6 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"",
"\t\t\tsetInactive();",
""
]
},
{
"added": [
"\t\ttry {",
"\t\t\t// Only close root connections, since for nested",
"\t\t\t// connections, it is not strictly necessary and close()",
"\t\t\t// synchronizes on the root connection which can cause",
"\t\t\t// deadlock with the call to runFinalization from",
"\t\t\t// GenericPreparedStatement#prepareToInvalidate (see",
"\t\t\t// DERBY-1947) on SUN VMs.",
"\t\t\tif (rootConnection == this) {",
"\t\t\t\tclose(exceptionClose);",
"\t\t\t}",
"\t\t}",
"\t\tfinally {"
],
"header": "@@ -1608,11 +1606,19 @@ public abstract class EmbedConnection implements EngineConnection",
"removed": [
"\t\tif (rootConnection == this)",
"\t\t{",
"\t\t\tif (!isClosed())",
"\t \t\tclose(exceptionClose);"
]
}
]
}
] |