kosurusekhar | 9 Jan 08:16 2015

How to get expected time or processing percentage of Backup | Restore | a long running query

Hi folks,

We have an application using a Derby database, with backup | restore |
cleanup (delete processed records from the DB) options. The users of our
application are asking us to show a processing percentage or estimated time,
so they can tell whether the process is working or has hung.

Is there any kind of feature or script to get this?

Please let me know the possibilities.

Thanks in advance.

Regards
Sekhar.

--
View this message in context: http://apache-database.10148.n7.nabble.com/How-to-get-expected-time-or-processing-percentage-of-Backup-Restore-a-long-running-query-tp143571.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.
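As far as I know, Derby exposes no progress counter for SYSCS_UTIL.SYSCS_BACKUP_DATABASE or for restore in the 10.x line. One rough workaround is to run the backup on one thread and, from another, compare the size of the growing backup target directory against the size of the database directory. A minimal sketch of that size comparison (the directory paths are placeholders):

```java
import java.io.File;

public class BackupProgress {
    /** Recursively sum the sizes of all regular files under dir. */
    static long dirSize(File dir) {
        long total = 0;
        File[] children = dir.listFiles();
        if (children == null) return dir.length(); // plain file or unreadable dir
        for (File f : children) {
            total += f.isDirectory() ? dirSize(f) : f.length();
        }
        return total;
    }

    /** Rough completion percentage: bytes copied so far vs. source size. */
    static int percentDone(long copied, long sourceSize) {
        if (sourceSize <= 0) return 100;
        return (int) Math.min(100, copied * 100 / sourceSize);
    }

    public static void main(String[] args) {
        File db = new File("myDB");            // placeholder: database directory
        File backup = new File("backup/myDB"); // placeholder: backup target dir
        // Poll this periodically while SYSCS_UTIL.SYSCS_BACKUP_DATABASE
        // runs on another thread.
        System.out.println(percentDone(dirSize(backup), dirSize(db)) + "% done");
    }
}
```

The same idea works for restore (compare against the backup's size). For a long-running query there is no percentage to read at all; the usual advice is to confirm the query is still making progress via the SYSCS_DIAG.TRANSACTION_TABLE and SYSCS_DIAG.LOCK_TABLE diagnostic tables rather than to estimate a completion time.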

Varun Sawaji | 7 Jan 08:09 2015

How to bundle Apache Derby directly in a JAR without installing it

Hi,

I have an application developed using Java and Apache Derby. I want to create an executable JAR and run it on any system that does not have Derby installed, so that when I click on the JAR file, Derby is set up on the system as well. Is this possible?

Varun
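Embedded Derby needs no separate installation: ship derby.jar alongside (or inside) your application JAR, for example via a Class-Path entry in the manifest or by building a single fat JAR, and the database itself is created on first connect with ;create=true. A minimal sketch, assuming a made-up database name "appdb":

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class EmbeddedDemo {
    /** Build an embedded-Derby JDBC URL; create=true makes the DB on first use. */
    static String embeddedUrl(String dbName, boolean create) {
        return "jdbc:derby:" + dbName + (create ? ";create=true" : "");
    }

    public static void main(String[] args) throws Exception {
        // Requires derby.jar on the classpath (bundled with the application).
        try (Connection c = DriverManager.getConnection(embeddedUrl("appdb", true))) {
            System.out.println("Connected to " + c.getMetaData().getURL());
        }
    }
}
```

With Java 6 and later the embedded driver is auto-loaded through the JDBC service loader, so no Class.forName call is needed; on first run the database appears as a directory ("appdb" here) under the working directory.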
Myrna van Lunteren | 15 Dec 19:50 2014

FYI: ApacheCon NA 2015 & travel assistance.

Dear all,

There will be an ApacheCon North America in Austin, TX, April 13-17, 2015.

For more information see:
http://apachecon.com/
and
http://events.linuxfoundation.org/events/apachecon-north-america

The Call for Papers closes Sunday, 1 February 2015.

There is travel assistance available subject to the usual rules, see: http://www.apache.org/travel/
The closing date for travel assistance applications is Friday, 6 February 2015.

Regards,
Myrna van Lunteren
(Apache DB Project chair)
pzsolt | 13 Dec 19:24 2014

Primary key auto increment sometimes fails

Hi!

I have noticed that when a table has an auto-generated primary key, the
auto-increment on INSERT sometimes fails. Instead of incrementing the value
by 1, Derby sometimes increments the primary key by 100, 1000, or some other
seemingly random value.

I can't reproduce it, because it is random.

For example, I have a table named 'INVOICE'. I inserted five rows and got
the following auto-generated keys:

1. INSERT: auto generated primary key: 806
2. INSERT: auto generated primary key: 807
3. INSERT: auto generated primary key: *904*
4. INSERT: auto generated primary key: *1004*
5. INSERT: auto generated primary key: 1005

It should be incremented by 1. The expected sequence should be:

1. INSERT: auto generated primary key: 806
2. INSERT: auto generated primary key: 807
3. INSERT: auto generated primary key: 808
4. INSERT: auto generated primary key: 809
5. INSERT: auto generated primary key: 810

Circa 188 companies are using my Derby-based software, and I don't know what
to do about this random error, nor anyone who could help me.

Has anybody encountered this strange error? Do you have any suggestions on
how to start debugging it? I can't reproduce it.

Best regards,
Zsolt Pocze

--
View this message in context: http://apache-database.10148.n7.nabble.com/Primary-key-auto-increment-sometimes-fails-tp143465.html
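For what it's worth, gaps like these are usually not an error: recent Derby releases preallocate identity/sequence values in batches for concurrency, and any values still cached when the database shuts down uncleanly (or the JVM crashes) are simply skipped. Identity columns guarantee uniqueness, not a gap-free sequence. If the gaps matter, my understanding is that the batch size can be tuned via the derby.language.sequence.preallocator property, e.g. in derby.properties:

```
# derby.properties -- shrink the identity/sequence preallocation batch.
# Smaller values mean smaller gaps after a crash, but more contention
# on the sequence generator under concurrent inserts.
derby.language.sequence.preallocator=5
```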

Kevin Luo | 12 Dec 19:50 2014

Re: ddlToDatabase writeDataToDatabase

Hi, I've been thrown to the bottom of a valley over the last 2 hours trying to figure out why I keep getting this error message:

 

>> Returning connection org.apache.commons.dbcp.PoolableConnection@fc23cf to data source.
>> Remaining connections: None

F:\NBProject\DBTool\dist\build.xml:33 (which is <ddlToDatabase schemaFile="project-schema.xml" verbosity="debug">):
org.apache.ddlutils.io.DataSinkException: java.sql.SQLException: Connection is closed.
         at org.apache.ddlutils.io.DataToDatabaseSink.end(DataToDatabaseSink.java:221)
         at org.apache.ddlutils.task.WriteDataToDatabaseCommand.execute(WriteDataToDatabaseCommand.java:187)
         at org.apache.ddlutils.task.DatabaseTaskBase.executeCommands(DatabaseTaskBase.java:341)
         at org.apache.ddlutils.task.DatabaseTaskBase.execute(DatabaseTaskBase.java:381)
         at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
         at sun.reflect.GeneratedMethodAccessor120.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
         at org.apache.tools.ant.Task.perform(Task.java:348)
         at org.apache.tools.ant.Target.execute(Target.java:392)
         at org.apache.tools.ant.Target.performTasks(Target.java:413)
         at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
         at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
         at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
         at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
         at org.apache.tools.ant.module.bridge.impl.BridgeImpl.run(BridgeImpl.java:283)
         at org.apache.tools.ant.module.run.TargetExecutor.run(TargetExecutor.java:541)
         at org.netbeans.core.execution.RunClassThread.run(RunClassThread.java:153)
Caused by: java.sql.SQLException: Connection is closed.

 

I followed every step of the instructions on the official Derby site: http://db.apache.org/derby/integrate/db_ddlutils.html. Can anyone please shed some light on where it could possibly be going wrong?

Thanks

Kev

Alex | 12 Dec 10:50 2014

Unexpected behavior of WHEN clause in CREATE TRIGGER statement

Hello,
In the example below, I expect the trigger to fire and update the 'done_date' column after an update of 'status', but it doesn't. The database is freshly created with 10.11. Is this a bug in Derby, or am I doing something wrong?

ij version 10.11
ij> connect 'jdbc:derby:MyDbTest;create=true';
ij> CREATE TABLE t1 (id INTEGER, done_date DATE, status CHAR(1));
0 rows inserted/updated/deleted
ij> CREATE TRIGGER tr1 AFTER UPDATE OF status ON t1 REFERENCING NEW AS newrow FOR EACH ROW WHEN (newrow.status='d') UPDATE t1 SET done_date=current_date WHERE id=newrow.id;
0 rows inserted/updated/deleted
ij> insert into t1 values (1, null, 'a');
1 row inserted/updated/deleted
ij> SELECT * FROM t1;
ID         |DONE_DATE |STA&
---------------------------
1          |NULL      |a   
 
1 row selected
ij> UPDATE t1 SET status='d';
1 row inserted/updated/deleted
ij> SELECT * FROM t1;
ID         |DONE_DATE |STA&
---------------------------
1          |NULL      |d   
 
1 row selected
ij> exit;
--
--Regards, Alex
kosurusekhar | 4 Dec 13:11 2014

Blob column behaviour when we don't have data & when we have less data

Hi Folks,

We have a requirement to store files in the DB, so we created a BLOB column.
As per the Derby manuals, we need to specify the maximum size of the content
while creating the column. We have 4 different file sizes (200KB, 512KB, 1MB,
6MB), and sometimes we have no content at all for this column. If I go with
a max size of 6MB:

1) Will Derby occupy 6MB of space for the row even if I don't insert any
data into this column?

2) Will Derby occupy the full 6MB if I insert smaller files, like 512KB
or 1MB?

Please let me know how Derby behaves in this case. I am using Derby
(Network Server) version 10.9.

Thanks in Advance.

Regards
Sekhar.

--
View this message in context: http://apache-database.10148.n7.nabble.com/Blob-column-behaviour-when-we-dont-have-data-when-having-less-data-tp143363.html
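In case it helps while waiting for an authoritative answer: my understanding is that Derby stores BLOBs as variable-length data, so the declared length is an enforced upper bound, not reserved space. A NULL or small value occupies only the pages its actual bytes need (plus per-page overhead), so a BLOB(6M) column holding a 512KB file does not cost 6MB. A sketch, with made-up table and column names:

```sql
-- The 6M is a maximum, not a reservation; rows with NULL content or a
-- 512KB file use only the space their actual bytes require.
CREATE TABLE file_store (
    id      INT NOT NULL PRIMARY KEY,
    name    VARCHAR(255),
    content BLOB(6M)
);
```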

Lin Ren | 3 Dec 09:09 2014

Urgent question about JIRA issue DERBY-526

Hi Guys,

 

Sorry for the broadcast… I have a quick question about issue DERBY-526. I'm currently using Derby version 10.10.1.3 and still hit the same problem:

 

When I use an IPv6 JDBC URL like "jdbc:derby://2001:db8:0:f101:0:0:0:9:1527/xxx;create=true;user=xxx;password=xxx"

 

I got the exception: java.lang.NumberFormatException: For input string: "db8:0:f101:0:0:0:9:1527"

 

I searched JIRA and found issue DERBY-526, but it seems to still be open. Can anyone tell me whether the issue is fixed now, and if so, in which version?

 

Thanks so much!

 

Lin
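One observation while the issue is open: the exception suggests the client splits host from port at the first ':', which cannot work for a bare IPv6 literal. The general URI convention (RFC 2732) is to enclose IPv6 addresses in square brackets; whether the Derby client of that era honors brackets is exactly what DERBY-526 tracks, so a common workaround is to skip URL parsing entirely and set the host and port through the client DataSource setters (setServerName / setPortNumber / setDatabaseName on org.apache.derby.jdbc.ClientDataSource). Just to illustrate how bracketing disambiguates the port, using only standard java.net.URI:

```java
import java.net.URI;

public class Ipv6HostPort {
    public static void main(String[] args) {
        // Unbracketed, the first ':' makes "2001" look like the whole host
        // and the rest an unparseable port. Brackets fix the split:
        URI u = URI.create("derby://[2001:db8:0:f101::9]:1527/mydb");
        System.out.println("host=" + u.getHost() + " port=" + u.getPort());
    }
}
```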

John English | 29 Nov 13:25 2014

Duplicate key feature request

Something that I find crops up quite often is code to deal with duplicate keys. 
I often want to insert into a table, or update if the key already exists. In 
MySQL I can just use INSERT ... ON DUPLICATE KEY UPDATE ... for this, but with 
Derby I end up with code that looks like this:

   try {
     //... insert new row
   }
   catch (SQLException e) {
     if (e.getSQLState().equals(DUPLICATE_KEY)) {
       // ... update existing row
     }
     else {
       throw e;
     }
   }

In the absence of something like INSERT ... ON DUPLICATE KEY UPDATE, would it 
not perhaps be a good idea for Derby to subclass SQLException so that it could 
throw a (say) SQLKeyExistsException to avoid ugly repetitive code like the 
above? Or is there already something that I've overlooked that addresses this 
problem?

TIA,
--
John English
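One more option: starting with 10.11, Derby supports the SQL:2003 MERGE statement, which expresses insert-or-update in a single statement and avoids the catch-the-SQLState pattern. A sketch with made-up table and column names (the one-row SYSIBM.SYSDUMMY1 serves as the source table; in real code the literals would be bound parameters):

```sql
-- Upsert one row of "accounts" keyed on id.
MERGE INTO accounts a
USING SYSIBM.SYSDUMMY1 s
ON a.id = 42
WHEN MATCHED THEN
    UPDATE SET a.balance = 100
WHEN NOT MATCHED THEN
    INSERT (id, balance) VALUES (42, 100);
```

For databases that must stay on pre-10.11 releases, the catch approach remains the way to go; Derby's duplicate-key SQLState is '23505'.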

kosurusekhar | 27 Nov 12:56 2014

User Defined Type as Table Column

Hi folks,

We decided to create a table column with a user-defined type (a Java class
with a big byte array and certain other properties). Are there any issues or
problems (performance-wise) when doing inserts / updates / deletes on this
table?

If anybody has come across this kind of scenario, please share your inputs.

Regards
Sekhar.

--
View this message in context: http://apache-database.10148.n7.nabble.com/User-Defined-Type-as-Table-Column-tp143321.html
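For context on the mechanics, which drive the performance answer: a Derby UDT is declared against a Serializable Java class, and to my understanding the whole object is serialized on write and deserialized on read. A UDT wrapping a big byte array is therefore rewritten in full on every update and cannot be indexed or compared in SQL, so for large payloads a separate BLOB column alongside ordinary scalar columns is usually the cheaper layout. The declaration itself looks like this (class name made up; the class must implement java.io.Serializable):

```sql
CREATE TYPE document_bundle
EXTERNAL NAME 'com.example.DocumentBundle'
LANGUAGE JAVA;

-- Hypothetical table using the type:
CREATE TABLE bundles (
    id   INT NOT NULL PRIMARY KEY,
    data document_bundle
);
```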

Peter Ondruška | 25 Nov 11:59 2014

Locks on crashed database

Dear all,

I have a database that has locks in SYSCS_DIAG.LOCK_TABLE. How do I remove those locks? I restarted the database, but the locks are still there. SYSCS_DIAG.TRANSACTION_TABLE also has a related record with status PREPARED. This database was used with XA on an application server, but it was removed from it for troubleshooting.

Thanks

--
Peter Ondruška
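A PREPARED entry in SYSCS_DIAG.TRANSACTION_TABLE is an in-doubt XA transaction: Derby is obliged to hold its locks across restarts until a transaction manager tells it to commit or roll back, which is why restarting did not clear them. With the app server gone, the branch can be resolved manually through the standard javax.transaction.xa API: obtain an XAResource from Derby's XA data source (e.g. org.apache.derby.jdbc.ClientXADataSource), call recover(), and commit or roll back each returned Xid. A sketch of the resolution step, written against only the standard interfaces (whether each branch should be committed or rolled back is your call):

```java
import javax.sql.XAConnection;
import javax.sql.XADataSource;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class InDoubtCleaner {
    /** Roll back every in-doubt (PREPARED) branch; returns how many were resolved. */
    static int rollbackInDoubt(XAResource xar) throws XAException {
        Xid[] inDoubt = xar.recover(XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN);
        if (inDoubt == null) return 0;
        for (Xid xid : inDoubt) {
            xar.rollback(xid); // or xar.commit(xid, false) if the work must survive
        }
        return inDoubt.length;
    }

    public static void main(String[] args) throws Exception {
        // Assumption: an XADataSource configured for the database, e.g. an
        // org.apache.derby.jdbc.ClientXADataSource with server, port and
        // database name set. Left null here as a placeholder.
        XADataSource ds = null;
        if (ds == null) {
            System.out.println("Configure an XADataSource for the database first.");
            return;
        }
        XAConnection xac = ds.getXAConnection();
        try {
            System.out.println(rollbackInDoubt(xac.getXAResource()) + " branch(es) resolved");
        } finally {
            xac.close();
        }
    }
}
```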
