Discussion:
[Firebird-devel] gbak and 64-bit I/O on linux
Damyan Ivanov
2001-11-16 07:44:01 UTC
Hi all,

I read there's no precompiled version of Firebird RC1 for linux
with 64-bit I/O support, so I decided to build one myself (but failed).

What I did:

changed in jrd/common.h

#define UNIX_64_BIT_IO

to

#define UNIX_64_BIT_IO 1

and then built the beast:

$ ./Configure.sh PROD
$ ./Configure_SetupEnv.sh
$ make firebird

killed all processes using IB, killed all gds_* processes,
copied everything from interbase into /opt/interbase

Then I tried to restore a 2.5GB database (previously on multiple files),
but failed:

$ zcat backup.gbk.gz | /opt/interbase/bin/gbak -c stdin /home/base.gdb -p 4096

all went well, all data was restored, but while creating indexes it
banged:

gbak: restoring index PLACE_NAME_INDEX
gbak: restoring index RDB$PRIMARY56
gbak: restoring index PERSON_COUNTRY
gbak: restoring index PERSON_NAME1_INDEX
gbak: cannot commit index PERSON_NAME2_INDEX
gbak: ERROR: I/O error for file "/home/base.gdb"
gbak: ERROR: Error while trying to read from file
gbak: ERROR: No such file or directory
gbak: ERROR: internal gds software consistency check (buffer marked during cache unwind (268))
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)
gbak: Exiting before completion due to errors

The file base.gdb is exactly 2GB - 1 bytes long.
There is plenty of free space on the temporary drive.
The filesystem (ext2) supports files larger than 2G, tested with a 5G file.
kernel is 2.4.13
libc is 2.2.4
gcc is 2.95.4

gbak version is:

$ /opt/interbase/bin/gbak -z
gbak: gbak version LI-T6.2.576 Firebird Release Candidate 1


So the general question is: Is what I did (the change in jrd/common.h)
sufficient to get 64-bit I/O support? If not, what more should I tweak?
Will gbak benefit from it?
--
Damyan Ivanov Creditreform Bulgaria
***@creditreform.bg http://www.creditreform.bg/
phone: +359 2 928 2611, 929 3993 fax: +359 2 920 0994
mobile: +359 88 566067
Damyan Ivanov
2001-11-16 09:24:01 UTC
On Fri, Nov 16, 2001 at 11:43:00AM +0200
Post by Damyan Ivanov
Hi all,
I read there's no precompiled version of Firebird RC1 for linux
with 64-bit I/O support, so I decided to build one myself (but failed).
Replying to myself:

I overlooked jrd/common.h and made the change in the Darwin section
instead of the Linux section.

Thanks to Serge Levenkov.


Now building...


Damyan
--
Damyan Ivanov Creditreform Bulgaria
***@creditreform.bg http://www.creditreform.bg/
phone: +359 2 928 2611, 929 3993 fax: +359 2 920 0994
mobile: +359 88 566067
Pavel Cisar
2001-11-16 17:52:02 UTC
Hi,
Post by John Bellardo
We need to add a flag or something to the build process that defines
that macro on mixed 32/64 bit platforms (like linux) to make it easy to
do both builds. The lack of that mechanism is the only reason a 64 bit
build of RC1 isn't done yet (and maybe time, too).
I did the 64-bit RC1 Linux version, but I haven't had enough time to test
it to the extent that would satisfy me, so we decided not to slow down
the (eagerly awaited) RC1 release for it. I'm not sure if I'll have
enough time in the next two weeks to provide the 64-bit Linux builds,
as other duties draw me elsewhere, but I'll try. While I was at the
64-bit builds, I thought about updating the make process to do 32/64
builds easily, but I didn't do that, because we will probably roll over
to the FB2 codebase shortly, so it's not worth the effort (I can change
common.h on my local drive in a second, so why bother spending time
hacking old makefiles and scripts). Of course, it would be a plus for
users who want to build their own engine from sources, but the current
FB developers know where to stab the knife. Well, we can do that as part
of the final conservation of the FB1 codebase ,-)

Regards
-- Pavel
Pavel Cisar
2001-11-16 19:41:03 UTC
John,
Post by John Bellardo
The new FB2 build process uses autoconf. It should be straightforward
to have autoconf decide between a 32 and 64 bit build based on the
machine configuration. A parameter to autoconf, for example
./configure --with-64-bit-io, could force a 64 or 32 bit build. I think
the FB1 code will still be used for a while, at least until we can show
the FB2 codebase is performing on par with FB1. Based on that
observation I think we should modify the build system for FB1 to make
our users' lives easier.
Amen, brother, I am a convert now :-) Anyone here to volunteer to do
that? I will probably not have time even to brush my teeth, let alone
work on that, for the next few weeks.

Regards
-- Pavel

There is nothing wrong with InterBase
that Firebird can't fix for you
http://www.firebirdsql.org
Neil McCalden
2001-11-17 00:16:02 UTC
Post by Pavel Cisar
Post by John Bellardo
We need to add a flag or something to the build process that defines
that macro on mixed 32/64 bit platforms (like linux) to make it easy to
do both builds. The lack of that mechanism is the only reason a 64 bit
build of RC1 isn't done yet (and maybe time, too).
I did the 64-bit RC1 Linux version, but I haven't had enough time to test
it to the extent that would satisfy me. So we decided not to slow
down the (eagerly awaited) RC1 release for it.
This is basically the situation with 64-bit I/O for Solaris as well. I
have been working on it during the last couple of weeks, and it has ended
up requiring more changes than expected due to conflicts with ib_stdio.

The Classic engine is reading/writing a big db OK from isql, gstat/gfix
are OK, and gbak can generate a backup file > 2GB OK, but restoring a
database fails at the 2GB point. This is odd because the open/write
procedures it uses on the database are the same pio routines used by
isql, which operate on a >2GB database without problem. Trussing gbak
shows it opening/creating the new database with the 64-bit version of
open; any suggestions very welcome.

Super engine seems ok for everything - it uses pread/pwrite which needed
changing - but as I don't use super it has had only basic testing.

My current plan is to build clean 32bit only versions for the release
files and a 64bit super as a snapshot build with the differences as a
patch within the cvs.
--
Neil McCalden @home ***@zizz.org
John Bellardo
2001-11-16 13:15:08 UTC
Right,
Post by Damyan Ivanov
On Fri, Nov 16, 2001 at 11:43:00AM +0200
Post by Damyan Ivanov
Hi all,
I read there's no precompiled version of Firebird RC1 for linux
with 64-bit I/O support, so I decided to build one myself (but failed).
I overlooked jrd/common.h and made the change in the Darwin section
instead of the Linux section.
Thanks to Serge Levenkov.
Now building...
We need to add a flag or something to the build process that defines
that macro on mixed 32/64 bit platforms (like linux) to make it easy to
do both builds. The lack of that mechanism is the only reason a 64 bit
build of RC1 isn't done yet (and maybe time, too).

-John
John Bellardo
2001-11-16 18:00:03 UTC
Post by Pavel Cisar
Hi,
Post by John Bellardo
We need to add a flag or something to the build process that defines
that macro on mixed 32/64 bit platforms (like linux) to make it easy to
do both builds. The lack of that mechanism is the only reason a 64 bit
build of RC1 isn't done yet (and maybe time, too).
I did the 64-bit RC1 Linux version, but I haven't had enough time to test
it to the extent that would satisfy me, so we decided not to slow down
the (eagerly awaited) RC1 release for it. I'm not sure if I'll have
enough time in the next two weeks to provide the 64-bit Linux builds,
as other duties draw me elsewhere, but I'll try. While I was at the
64-bit builds, I thought about updating the make process to do 32/64
builds easily, but I didn't do that, because we will probably roll over
to the FB2 codebase shortly, so it's not worth the effort (I can change
common.h on my local drive in a second, so why bother spending time
hacking old makefiles and scripts). Of course, it would be a plus for
users who want to build their own engine from sources, but the current
FB developers know where to stab the knife. Well, we can do that as part
of the final conservation of the FB1 codebase ,-)
For what it is worth I've been running the 64 bit Darwin version for a
while now without any problems. The RC1 build (all Darwin builds are 64
bit) I did ran through TCS fine, so I don't think the 64 bit code hurt
the DB :) I have a small IBPerl script I wrote that creates a >4 GB
database. I've run it a number of times on my system. After the DB is
created I run gfix, gstat, and gbak on it without any problems. I also
try adding a few indexes. All without problems. I don't know how you
plan on testing the 64 bit build, but I can send you the perl script if
you like.

-John
John Bellardo
2001-11-16 18:18:20 UTC
Damyan,
Post by Damyan Ivanov
Hi all,
I read there's no precompiled version of Firebird RC1 for linux
with 64-bit I/O support, so I decided to build one myself (but failed).
gbak: restoring index PLACE_NAME_INDEX
gbak: restoring index RDB$PRIMARY56
gbak: restoring index PERSON_COUNTRY
gbak: restoring index PERSON_NAME1_INDEX
gbak: cannot commit index PERSON_NAME2_INDEX
gbak: ERROR: I/O error for file "/home/base.gdb"
gbak: ERROR: Error while trying to read from file
gbak: ERROR: No such file or directory
gbak: ERROR: internal gds software consistency check (buffer marked during cache unwind (268))
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)
gbak: Exiting before completion due to errors
[...]
Believe it or not, this is good news. The new 64-bit I/O code allows the
32-bit builds (yours was 32-bit because the macro was defined in the
wrong location; see previous responses in this thread) to throw an error
when the database exceeds 2GB - 1 in size. Before, the engine would
silently trash the DB file :( Also, with the help of Ann, we were able
to fix a long-standing bug where the engine would go into an infinite
loop if there was an error writing to a DB file.

-John
John Bellardo
2001-11-16 18:23:09 UTC
Pavel,
Post by Pavel Cisar
Hi,
Post by John Bellardo
We need to add a flag or something to the build process that defines
that macro on mixed 32/64 bit platforms (like linux) to make it easy to
do both builds. The lack of that mechanism is the only reason a 64 bit
build of RC1 isn't done yet (and maybe time, too).
I did the 64-bit RC1 Linux version, but I haven't had enough time to test
it to the extent that would satisfy me, so we decided not to slow down
the (eagerly awaited) RC1 release for it. I'm not sure if I'll have
enough time in the next two weeks to provide the 64-bit Linux builds,
as other duties draw me elsewhere, but I'll try. While I was at the
64-bit builds, I thought about updating the make process to do 32/64
builds easily, but I didn't do that, because we will probably roll over
to the FB2 codebase shortly, so it's not worth the effort (I can change
common.h on my local drive in a second, so why bother spending time
hacking old makefiles and scripts). Of course, it would be a plus for
users who want to build their own engine from sources, but the current
FB developers know where to stab the knife. Well, we can do that as part
of the final conservation of the FB1 codebase ,-)
The new FB2 build process uses autoconf. It should be straightforward
to have autoconf decide between a 32 and 64 bit build based on the
machine configuration. A parameter to autoconf, for example
./configure --with-64-bit-io, could force a 64 or 32 bit build. I think
the FB1 code will still be used for a while, at least until we can show
the FB2 codebase is performing on par with FB1. Based on that
observation I think we should modify the build system for FB1 to make
our users' lives easier.

-John
Leyne, Sean
2001-11-16 18:34:03 UTC
John,
I think the FB1 code will still be used for a while,
at least until we can show
the FB2 codebase is performing on par with FB1.
I, for one, hope that we can get to the FB2 code as soon as possible --
like before the new year.

Mind you I had expected that we would have had RC1 released by the end
of Sept -- so what do I know ;-[


Sean
John Bellardo
2001-11-16 19:48:01 UTC
Post by Pavel Cisar
John,
Post by John Bellardo
The new FB2 build process uses autoconf. It should be straightforward
to have autoconf decide between a 32 and 64 bit build based on the
machine configuration. A parameter to autoconf, for example
./configure --with-64-bit-io, could force a 64 or 32 bit build. I think
the FB1 code will still be used for a while, at least until we can show
the FB2 codebase is performing on par with FB1. Based on that
observation I think we should modify the build system for FB1 to make
our users' lives easier.
Amen, brother, I am a convert now :-) Anyone here to volunteer to do
that? I will probably not have time even to brush my teeth, let alone
work on that, for the next few weeks.
It is one of those things on my list, but I'm currently tied up in FB2.
I have to think about the best way to do it. I'll try to take a look at
it next week.

-John
John Bellardo
2001-11-17 20:30:02 UTC
Hi all,
Post by Pavel Cisar
John,
Post by John Bellardo
The new FB2 build process uses autoconf. It should be straightforward
to have autoconf decide between a 32 and 64 bit build based on the
machine configuration. A parameter to autoconf, for example
./configure --with-64-bit-io, could force a 64 or 32 bit build. I think
the FB1 code will still be used for a while, at least until we can show
the FB2 codebase is performing on par with FB1. Based on that
observation I think we should modify the build system for FB1 to make
our users' lives easier.
Amen, brother, I am a convert now :-) Anyone here to volunteer to do
that? I will probably not have time even to brush my teeth, let alone
work on that, for the next few weeks.
I had a few down minutes today, so I tackled getting the 64 bit stuff
into the build system. The configure script prompts the user for a 32
or 64 bit build on all platforms except Darwin. As platform maintainers
decide 64 bit builds work for every OS version on their platform, they
can modify the Configure.sh script to not ask the user and just build 64
bit. The same is true for platforms that _only_ support 32 bit. I'm
testing out the changes now and should be able to post them tonight.

-John
Damyan Ivanov
2001-11-19 06:29:02 UTC
John,
Post by John Bellardo
I had a few down minutes today, so I tackled getting the 64 bit stuff
into the build system. The configure script prompts the user for a 32
or 64 bit build on all platforms except Darwin.
Thank you very much for your efforts.

Today I've updated from CVS (now using build 583), built without any
modification (answering 'yes' to the 64-bit I/O question), but I still
can't restore that 3GB gdb. The error remains:

gbak: cannot commit index PERSON_NAME2_INDEX
gbak: ERROR: I/O error for file "/home/credo/credo.gdb"
gbak: ERROR: Error while trying to read from file
gbak: ERROR: No such file or directory
gbak: ERROR: internal gds software consistency check (buffer marked during cache unwind (268))
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)
gbak: Exiting before completion due to errors
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)

It seems (at least to me) that something weird happened (this may be
detection of an attempt to write past 2GB when such a write is not
supported for one reason or another), then FB tries to 'unwind' the
cache and suddenly discovers that the cache is 'marked'.
What this all means is unknown to me.

Maybe this has something to do with the fact that at the moment of the
error gbak is activating indexes. I'll try with a test database to see
if anything bad happens.


Damyan
--
Damyan Ivanov Creditreform Bulgaria
***@creditreform.bg http://www.creditreform.bg/
phone: +359 2 928 2611, 929 3993 fax: +359 2 920 0994
mobile: +359 88 566067
John Bellardo
2001-11-18 12:16:44 UTC
OK,
Post by Damyan Ivanov
Hi all,
[...]
I had a few down minutes today, so I tackled getting the 64 bit stuff
into the build system. The configure script prompts the user for a 32
or 64 bit build on all platforms except Darwin. As platform
maintainers decide 64 bit builds work for every OS version on their
platform, they can modify the Configure.sh script to not ask the user
and just build 64 bit. The same is true for platforms that _only_
support 32 bit. I'm testing out the changes now and should be able to
post them tonight.
I've committed my changes now. The Configure.sh script now asks the
user if they want a 32 or 64 bit build. This should work until FB2 and
autoconf get off the ground.

-John
John Bellardo
2001-11-19 13:33:04 UTC
Damyan,
Post by Damyan Ivanov
John,
Post by John Bellardo
I had a few down minutes today, so I tackled getting the 64 bit stuff
into the build system. The configure script prompts the user for a 32
or 64 bit build on all platforms except Darwin.
Thank you very much for your efforts.
Today I've updated from CVS (now using build 583), built without any
modification (answering 'yes' to the 64-bit I/O question), but I still can't
gbak: cannot commit index PERSON_NAME2_INDEX
gbak: ERROR: I/O error for file "/home/credo/credo.gdb"
gbak: ERROR: Error while trying to read from file
gbak: ERROR: No such file or directory
gbak: ERROR: internal gds software consistency check (buffer marked during cache unwind (268))
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)
gbak: Exiting before completion due to errors
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)
gbak: ERROR: internal gds software consistency check (can't continue after bugcheck)
It seems (at least to me) that something weird happened (this may be
detection of an attempt to write past 2GB when such a write is not
supported for one reason or another), then FB tries to 'unwind' the
cache and suddenly discovers that the cache is 'marked'.
What this all means is unknown to me.
Maybe this has something to do with the fact that at the moment of the
error gbak is activating indexes. I'll try with a test database to see
if anything bad happens.
What version of linux are you running? How big was the database when
the restore stopped? If it was still 2GB - 1 then are you sure you have
your 64 bit build installed correctly? What are the contents of the
"jrd/64bitio.h" file?

-John
Leyne, Sean
2001-11-19 15:11:05 UTC
Damyan,

Are you sure that the file system of the partition which the database is
stored on supports large files (i.e. 64-bit I/O)?


Sean
Damyan Ivanov
2001-11-20 09:50:02 UTC
On Mon, Nov 19, 2001 at 12:09:44PM -0500
Post by John Bellardo
Damyan,
Are you sure that the file system of the partition which the database is
stored on supports large files (i.e. 64-bit I/O)?
Yes. There was a program on this list (URL and program lost, but I may
find the time to write one) which tested and confirmed that it is
possible to work with files > 4G.

dd surely creates a 5G file without a problem, but dd uses a sequential
mechanism.


Even if I try to open a gdb exactly 2G long I get "File too large".

This error is returned by open(filename, openmode) if the file is over 2G
and openmode does not contain O_LARGEFILE.

I played a little with the problem, and it seems that PIO_open (unix.c),
for example, calls open(filename, openmode) without setting the
O_LARGEFILE bit of openmode. I manually added openmode |= O_LARGEFILE,
but this fails at compilation with "undefined O_LARGEFILE". O_LARGEFILE
is defined in <fcntl.h> if the macro __USE_LARGEFILE64 is defined.
However, defining this macro in 64bitio.h (which is included from
common.h) and moving #include "../jrd/common.h" before #include
<fcntl.h> did not solve the problem, and I still get compilation errors.

There is another macro for 64-bit lseek: __USE_FILE_OFFSET64. I tried
putting it into 64bitio.h also.

One last tweak I made: defining LSEEK_OFFSET_CAST to (loff_t), which is
a 64-bit off_t - the type used for file offsets.

I think the problem is the failed compilation even when
__USE_LARGEFILE64 is defined. I'll continue digging from time to time,
but my schedule squeezes me.


Damyan
Damyan Ivanov
2001-11-20 12:45:07 UTC
I've found that program that checks whether 64-bit I/O is OK (attached).

As far as I can see, the only special part is:

#define _FILE_OFFSET_BITS 64
#include <unistd.h>

I'll try this in 64bitio.h and see what happens.


Damyan
John Bellardo
2001-11-20 14:12:02 UTC
Damyan,
Post by Damyan Ivanov
I've found that program that checks if 64-bit i/o is ok (attached).
#define _FILE_OFFSET_BITS 64
#include <unistd.h>
Right, that should be all that is needed. Try moving the #include
"../jrd/common.h" line in unix.c (line 40) so that it is the first
#include line in the file (around line 30). That might make a
difference.

-John
Damyan Ivanov
2001-11-21 12:43:04 UTC
John,
Post by John Bellardo
Post by Damyan Ivanov
#define _FILE_OFFSET_BITS 64
#include <unistd.h>
Right, that should be all that is needed. Try moving the #include
"../jrd/common.h" line in unix.c (line 40) so that it is the first
#include line in the file (around line 30). That might make a
difference.
You're right! I finally succeeded in building Firebird with 64-bit file
I/O.

I did not stress-test, because we have already launched the production
server, but on my workstation I successfully created a 2.3G gdb file
(25e6 rows in one table) and then some indexes. The same test failed
before. Unfortunately there's no 4G of spare space around, so the 4G
limit is also not tested.

Here's the diff. Basically two changes:

1) add #define _FILE_OFFSET_BITS 64 and #include <unistd.h> in
64bitio.h.

2) move common.h to be the first include in jrd/unix.c. I wonder if this
would break anything on other platforms.


Thank you for all the information and hints you provided.

Damyan

----------------------------------8<----------------------
diff -r -u interbase-592/Configure.sh interbase/Configure.sh
--- interbase-592/Configure.sh Sun Nov 18 23:27:47 2001
+++ interbase/Configure.sh Wed Nov 21 14:01:59 2001
@@ -299,6 +299,8 @@
echo "#ifndef _64_BIT_IO_H" > jrd/64bitio.h
echo "#define _64_BIT_IO_H" >> jrd/64bitio.h
echo "#define UNIX_64_BIT_IO" >> jrd/64bitio.h
+ echo "#define _FILE_OFFSET_BITS 64" >> jrd/64bitio.h
+ echo "#include <unistd.h>" >> jrd/64bitio.h
echo "#endif" >> jrd/64bitio.h
BuildIOsize=64
else
diff -r -u interbase-592/jrd/unix.c interbase/jrd/unix.c
--- interbase-592/jrd/unix.c Mon Oct 29 21:45:30 2001
+++ interbase/jrd/unix.c Wed Nov 21 14:02:10 2001
@@ -28,6 +28,7 @@
#define LOCAL_SHLIB_DEFS
#endif

+#include "../jrd/common.h"
#include "../jrd/ib_stdio.h"
#include <fcntl.h>
#include <errno.h>
@@ -37,7 +38,6 @@
#include <sys/stat.h>
#include <string.h>

-#include "../jrd/common.h"
#if !(defined SEEK_END && defined F_OK)
#include <unistd.h>
#endif
---------------------------------8<---------------------
--
Damyan Ivanov Creditreform Bulgaria
***@creditreform.bg http://www.creditreform.bg/
phone: +359 2 928 2611, 929 3993 fax: +359 2 920 0994
mobile: +359 88 566067
John Bellardo
2001-11-20 14:23:02 UTC
Damyan,
Post by Damyan Ivanov
On Mon, Nov 19, 2001 at 12:09:44PM -0500
Post by John Bellardo
Damyan,
Are you sure that the file system of the partition which the database is
stored on supports large files (i.e. 64-bit I/O)?
Yes. There was a program on this list (URL and program lost, but I may
find the time to write one) which tested and confirmed that it is
possible to work with files > 4G.
dd surely creates a 5G file without a problem, but dd uses a sequential
mechanism.
Right, but as you said, sequential is different.
Post by Damyan Ivanov
Even if I try to open a gdb exactly 2G long I get "File too large".
This error is returned by open(filename, openmode) if the file is over 2G
and openmode does not contain O_LARGEFILE.
To the best of my understanding the O_LARGEFILE flag only applies when
32 bit operations are being used. So if you have a 32 bit engine that
wants to open a file over 2GB, then you need O_LARGEFILE. If you are
using a 64 bit engine then the flag isn't needed.
Post by Damyan Ivanov
I played a little with the problem, and it seems that PIO_open (unix.c),
for example, calls open(filename, openmode) without setting the
O_LARGEFILE bit of openmode. I manually added openmode |= O_LARGEFILE,
but this fails at compilation with "undefined O_LARGEFILE". O_LARGEFILE
is defined in <fcntl.h> if the macro __USE_LARGEFILE64 is defined.
However, defining this macro in 64bitio.h (which is included from
common.h) and moving #include "../jrd/common.h" before #include
<fcntl.h> did not solve the problem, and I still get compilation errors.
OK, I'll elaborate on my last message a little more. Don't make any
changes other than moving the common.h #include line. All the macros
should be set properly already. It is possible the macros need to be
set before <sys/types.h> gets included (it may well be included from
<fcntl.h>). You shouldn't need O_LARGEFILE or any different macro to
enable 64 bit functions. The test program you reposted to the list
shows that.
Post by Damyan Ivanov
There is another macro for 64-bit lseek: __USE_FILE_OFFSET64. I tried
putting it into 64bitio.h also.
One last tweak I made: defining LSEEK_OFFSET_CAST to (loff_t), which is
a 64-bit off_t - the type used for file offsets.
You probably shouldn't change LSEEK_OFFSET_CAST to loff_t. If the macro
that enables the 64-bit functions was picked up correctly (it obviously
isn't in your build), then off_t should be loff_t. By setting the
parameter size yourself you could be asking for trouble.
Post by Damyan Ivanov
I think the problem is the failed compilation even when
__USE_LARGEFILE64 is defined. I'll continue digging from time to time,
but my schedule squeezes me.
-John