Discussion:
WriteFile()
Frank A. Uepping
2005-04-08 20:32:53 UTC
Permalink
Hello,

Can we always assume that when WriteFile() (in synchronous operation
mode) returns successfully it has written the requested number of bytes,
i.e. nNumberOfBytesToWrite == *lpNumberOfBytesWritten?

Or is it legal for WriteFile() to return successfully without having
written all requested bytes, i.e. nNumberOfBytesToWrite >
*lpNumberOfBytesWritten?

(I assume the latter, otherwise I see no sense for having
lpNumberOfBytesWritten.)

Thanks
FAU
Chris Burnette
2006-06-18 04:05:51 UTC
Permalink
It is legal for WriteFile to return successfully without writing all
requested bytes.

Chris
Hector Santos
2006-06-18 04:05:53 UTC
Permalink
Post by Chris Burnette
It is legal for WriteFile to return successfully without writing all
requested bytes.
Chris, under what conditions?

This should only be possible in ASYNC mode. In SYNC mode, if I say X
bytes, it had better come back as X bytes written. Otherwise, how can any
application trust your file I/O designs?

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Hector Santos
2005-04-09 11:35:49 UTC
Permalink
Chris said it was possible.

Unless he knows something unbeknownst to me, I don't think so. I can't
think of any normal condition where this is possible.

In sync mode, the WriteFile() operation must complete; otherwise there is an
underlying problem, such as running out of disk space, in which case you
can get less than the requested write amount. In that case WriteFile() returns
FALSE and the extended error is ERROR_DISK_FULL.

In async mode, it is possible to get less than the requested write. That's
the whole purpose of async.

So you are correct, the lpNumberOfBytesWritten is redundant in SYNC mode,
and this is highlighted by the MSDN documentation where it says it can be
NULL in sync mode (starting with Windows 2000).

Just think about it. In sync mode, what physical barrier could there be
that could produce a partial write?

I can only see a disk error, or maybe the handle becoming invalid or some
other conflict while the write is happening, but these should also produce a
FALSE result. You should never get TRUE for errors; otherwise the world
will blow up. <g>

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Frank A. Uepping
2005-04-09 12:56:38 UTC
Permalink
Post by Hector Santos
Chris said it was possible.
Unless he knows something unbeknownst to me, I don't think so. I can't
think of any normal condition where this is possible.
In sync mode, the WriteFile() operation must complete, otherwise there is a
underlining problem. Like possibly out of disk space, in which case you
can get less than the requested write amount. WriteFile() returns false the
extended error is ERROR_DISK_FULL.
It would also be conceivable that WriteFile() returns TRUE (success!)
with fewer bytes written than requested (because that is what the file
system layer was able to process before the error occurred), while the
error remains pending. The next call to the file system layer would then
reveal the error.
Post by Hector Santos
So you are correct, the lpNumberOfBytesWritten is redundant in SYNC mode,
and this is highlighted by the MSDN documentation where it says it can be
NULL in sync mode (starting with Windows 2000).
Hmm, I read this: "If lpOverlapped is NULL, lpNumberOfBytesWritten
cannot be NULL." In SYNC mode we have lpOverlapped set to NULL;
consequently lpNumberOfBytesWritten is mandatory. Therefore I think
lpNumberOfBytesWritten should be checked anyway.
Post by Hector Santos
Just think about it. In sync mode, what physical barrier could there be
that can produce partial write?
I can only see a disk error, or maybe the handle becoming invalid or some
other conflict while the write is happening, but these should also produce a
FALSE result.
Don't forget that WriteFile() is not only used together with the file
system layer. It is more a generic mechanism used by devices as well!

Thanks
FAU
Gary Chanson
2005-04-09 19:27:57 UTC
Permalink
Post by Frank A. Uepping
Post by Hector Santos
Chris said it was possible.
Unless he knows something unbeknownst to me, I don't think so. I can't
think of any normal condition where this is possible.
In sync mode, the WriteFile() operation must complete, otherwise there is a
underlining problem. Like possibly out of disk space, in which case you
can get less than the requested write amount. WriteFile() returns false the
extended error is ERROR_DISK_FULL.
It would also be thinkable that WriteFile() returns true (successful!)
with less bytes written than requested (because this is what the file
system layer was able to process till the error happens), while the
error becomes pending. The next call to the file system layer will then
reveal the error.
It would be "thinkable" but it's not implemented or documented this way.
Post by Frank A. Uepping
Don't forget that WriteFile() is not only used together with the file
system layer. It is more a generic mechanism used by devices as well!
True but device drivers are supposed to live within Microsoft's rules.
Someone will have to offer an example of an exception before you've made your
case.
--
-GJC [MS Windows SDK MVP]
-Software Consultant (Embedded systems and Real Time Controls)
- http://www.mvps.org/ArcaneIncantations/consulting.htm
-***@mvps.org
Alexander Grigoriev
2006-06-18 04:05:55 UTC
Permalink
A serial driver may send less than requested, because of a timeout. This
does not qualify as an error, because you will know exactly how much is
sent.
Hector Santos
2006-06-18 04:05:56 UTC
Permalink
Post by Alexander Grigoriev
Post by Gary Chanson
True but device drivers are supposed to live within Microsoft's rules.
Someone will have to offer an example of an exception before you've made
your case.
A serial driver may send less than requested, because of a timeout. This
does not qualify as an error, because you will know exactly how much is
sent.
But how is this exposed to the upper Win32 application layer?

For a synchronous RS232 serial WIN32 file handle, AFAIK it is not currently
possible to prepare a timeout for WriteFile(). You can only prepare a read
timeout with SetCommTimeouts(). For writing, you need to prepare it under
ASYNC mode to get non-blocking behavior.

For a synchronous socket device, you can get a WSAETIMEDOUT error, but this
is an error condition.

In any case, I think what is important to note in this thread is that the
integrity of the system is at stake if such "fuzzy" unknowns were true. In a
Win32 design with SYNC behavior expectations, a partial write represents an
error condition. While anything is possible at the device layer, the design
expectation for the application-layer WriteFile() function in synchronous
mode is to complete the write.

I'm seriously interested to know when this is not the case in SYNC mode,
because I have a large virtual device product design investment around these
fundamental WIN32 async/sync file I/O principles.

Thanks

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Alexander Grigoriev
2005-04-10 04:45:36 UTC
Permalink
COMMTIMEOUTS structure contains WriteTotalTimeoutMultiplier and
WriteTotalTimeoutConstant.
Hector Santos
2005-04-10 08:04:11 UTC
Permalink
Post by Alexander Grigoriev
COMMTIMEOUTS structure contains WriteTotalTimeoutMultiplier and
WriteTotalTimeoutConstant.
Sorry Alexander, you are right. I didn't correctly state it.

You can setup a serial write timeout to have a sync WriteFile() call to
return (and without an error).

But now your design "mentality" includes async-like design expectations when
you programmatically:

a) prepare your timeouts such that the call can come back early, and
b) prepare a WriteFile() loop to complete all the partial writes.

However, if we are strictly thinking sync and all write timeouts are zero,
it is a 100% blocked sync call and WriteFile *SHOULD NOT* come back until
the write is 100% complete. Anything different is expected to be an error
condition.

So I guess the answer to the OP's question is:

Unless you prepare the device to behave async or with timeout parameters,
where possible for the specific device type, then yes, it is safe to assume a
sync WriteFile() call will have the request equal the written amount. It has
to. Otherwise the world will blow up.

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Alexander Grigoriev
2006-06-18 04:06:01 UTC
Permalink
The timeout may occur no matter whether overlapped or synchronous I/O is
used. It may be very unusual for an app to use the driver-provided write
timeout, but nevertheless it's supported.
Hector Santos
2006-06-18 04:06:04 UTC
Permalink
Post by Alexander Grigoriev
The timeout may occur no matter whether overlapped or synchronous I/O is
used.
If it was not programmable or documented as expected possible behavior, I
would only expect it to occur under error conditions. If no extended error
is set, then that's a flaw.
Post by Alexander Grigoriev
It may be very unusual for an app to use the driver-provided write
timeout, but nevertheless it's supported.
Of course, everything can be "controlled." I'm sure the device I/O
capabilities may be exposed as well. But this may not be the design called
for, or the expected behavior under normal circumstances.

Of all the standard WIN32 devices, I believe the serial device is the only
device with an explicit SetXXXXXTimeout()-style function available.

Anyway, I will stand by my design principle here. A robust WIN32
application should expect the requested amount to equal the written amount
for a 100% synchronous blocked WriteFile() call. Anything else is an error
condition, whether it's a timeout or not.

Just consider the consequences of what you are suggesting. Every
WriteFile() call would have to be replaced with a buffer-flushing loop
design just to keep a program intact.

Is this what you are saying, that using WriteFile() with all error trappings
under the roof is still technically bad Win32 coding because it fails to
check the *non-error partial writes* when they were NOT expected to happen?

What about virtual buffering? If the device is not set to buffer and
rather uses 100% commit, is this still the case?

What you seem to be suggesting is that non-error timeout considerations
should be part of ALL possible synchronous non-timeout-prepared WriteFile()
implementations, regardless of whether it was expected or not, because it
*may* be NORMAL for the sub-system device to be unavailable under a
NON-ERROR situation.

Would that be a correct paraphrasing?

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Gary Chanson
2005-04-10 02:37:18 UTC
Permalink
Post by Alexander Grigoriev
A serial driver may send less than requested, because of a timeout. This
does not qualify as an error, because you will know exactly how much is
sent.
But doesn't it get reported as a timeout error?
--
-GJC [MS Windows SDK MVP]
-Software Consultant (Embedded systems and Real Time Controls)
- http://www.mvps.org/ArcaneIncantations/consulting.htm
-***@mvps.org
Alexander Grigoriev
2005-04-10 04:49:23 UTC
Permalink
The IRP gets completed with STATUS_TIMEOUT (0x00000102), which is a
success code.
Chris Burnette
2005-04-10 05:04:22 UTC
Permalink
No it doesn't.

See my other posting about this, but if you set up a write timeout
using SetCommTimeouts and WriteFile times out, I wouldn't count that as an
error. It did exactly what you told it to do.

Chris
Frank A. Uepping
2006-06-18 04:06:00 UTC
Permalink
Post by Gary Chanson
Post by Frank A. Uepping
It would also be thinkable that WriteFile() returns true (successful!)
with less bytes written than requested (because this is what the file
system layer was able to process till the error happens), while the
error becomes pending. The next call to the file system layer will then
reveal the error.
It would be "thinkable" but it's not implemented or documented this way.
Someone will have to offer an example of an exception before you've
made your case.
I think this is the only reasonable way!
It would be a very poor design if WriteFile() failed after a partial
write. Think about a situation where a user has an error recovery
strategy implemented where the data just gets written to a backup device
(or something) if the other device fails (for whatever reason). For
this to work, the user needs to know the exact number of bytes processed
so far. If WriteFile() *embezzles* the number of (partially) written
bytes in the case of an error, an error recovery strategy could not be
implemented reasonably and you risk data duplication.


Thanks
FAU
Hector Santos
2005-04-10 20:21:39 UTC
Permalink
This is implementation specific, and if that is the behavior you expect can
happen, then you program for it.

In other words, if you expect interface contention, flow or I/O issues, you
need to design for all the possible issues.

For a device where there are NO exposed WIN32 timeout capabilities, and you
program for a 100% blocking situation, then it had better work, or there is
an error with an extended error set somewhere.

If the provider of the device exposes some attributes that you can set with
DeviceIoControl(), or the provider documents it as possible, then that's a
different situation and again, it should be part of the design.

I do not expect a non-error TIMEOUT on a harddrive.

I do not expect a non-error TIMEOUT on a serial device (when I don't use
the timeouts).

I do not expect a non-error TIMEOUT on any device where I expect a 100%
blocked call. If it is possible "under normal conditions" to get a
non-error timeout where a 100% block is expected, then the sub-system
RISKS breaking thousands of applications.

Unless I am completely out of my mind, I don't see how anyone can dispute
this.

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Gary Chanson
2006-06-18 04:05:54 UTC
Permalink
Post by Hector Santos
Chris said it was possible.
Unless he knows something unbeknownst to me, I don't think so. I can't
think of any normal condition where this is possible.
I can't think of any either.
--
-GJC [MS Windows SDK MVP]
-Software Consultant (Embedded systems and Real Time Controls)
- http://www.mvps.org/ArcaneIncantations/consulting.htm
-***@mvps.org
Chris Burnette
2005-04-10 05:01:58 UTC
Permalink
The original question was whether WriteFile could ever return TRUE and have
nNumberOfBytesToWrite > *lpNumberOfBytesWritten

The documentation on MSDN for WriteFile clearly states:
When writing to a nonblocking, byte-mode pipe handle with insufficient
buffer space, WriteFile returns TRUE with *lpNumberOfBytesWritten <
nNumberOfBytesToWrite

Furthermore, this can happen if someone calls SetCommTimeouts and the serial
port cannot write the data fast enough and times out. Try setting
WriteTotalTimeoutConstant to 10 and WriteTotalTimeoutMultiplier to 0 and
then try writing 1MB to a COM port. Since it can't send 1MB across that fast,
WriteFile times out and returns the number of bytes it was able to transmit.
In this case, WriteFile still returns a success value (the same thing
happens on a read timeout). From my perspective, this is expected... you've
told it to time out and it did... no error occurred.

I would also think that the following is a plausible situation:
You have a disk that has 1MB left on it.
You call WriteFile with a 2MB buffer.
WriteFile returns TRUE with 1MB written.
The next call to WriteFile returns FALSE with a disk full error.

I tried verifying this using a USB key (didn't want to fill up my hard
drive); however, the above situation didn't happen. WriteFile failed (disk
full error) without filling up my key (there was free space left). This
might lead me to believe that when using WriteFile to write to storage
media, it does some additional checking to see if it can write the entire
contents of the buffer before doing it. However, this could be unique to
using the USB key and may depend on the file system, drivers, and operating
system involved.

Typically, I've found that WriteFile will report that it's written the same
number of bytes that you've told it to write. I would think that an
application should not necessarily assume that this is the case; these
return values are in there for a reason.

Chris
Hector Santos
2006-06-18 04:06:00 UTC
Permalink
Post by Chris Burnette
Typically, I've found that WriteFile will report that it's written the same
number of bytes that you've told it to write. I would think that an
application should not necessarily assume that this is the case; these
return values are in there for a reason.
Chris, I respectfully disagree.

This is not a fuzzy situation. There is no unknown carbon intelligence
making decisions for us here. It is black and white. There is no gray
area.

If the design calls for sync behavior with no regard to timeouts, then
you must design the code based on expected behaviors. A sync call to
WriteFile() must return a written amount equal to the requested amount.
Otherwise there is an error condition.

If you can't "trust" the result, then there MUST be a reason for the lack of
trust. You can't program this stuff blindly... but then again, that's
probably why bugs exist :-)

Here is another way to look at this:

If you design your I/O with async in mind, then you use IO PENDING logic
to properly synchronize it.

If you design your I/O as sync, and what you say is true, then YOU have no
choice but to change every WriteFile() call in your code into a loop
watching for non-error partial writes.

Unless you specifically expect this type of behavior (using serial write
timeouts, for example), that is completely redundant.

I'm not saying that isn't OK (every sync WriteFile replaced with a wrapper
that watches for generic timeout partial writes), but you might as well use
an ASYNC I/O communications design.

But if I have a sync WriteFile() with no timeout parameters set for the
device (if possible), then I expect a blocking call and a 100% complete write.

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Chris Burnette
2006-06-18 04:06:04 UTC
Permalink
I think one of the problems that I see is that there isn't enough
documentation on the expected behavior of Read/WriteFile. There is no
mention of what success really means under various conditions. It's up to us
to try to figure it out. And then there's always unexpected behaviors due to
hardware malfunctions, etc. Part of this is probably because Read/WriteFile
can be used in so many different ways: files, serial ports, pipes, etc.
Heck, the documentation doesn't even mention the various errors that
GetLastError can return. Try looking in the documentation for WriteFile to
see what the return value for GetLastError is when a disk is full. Not even
mentioned.

But, let's apply your argument to ReadFile. Let's assume we're using
ReadFile to read a series of bytes synchronously. According to your
argument, if ReadFile cannot read the requested number of bytes, it should
fail. However, this isn't the case. The documentation for ReadFile states
that when you read to the end of a file, it's normal for ReadFile to return
success with *lpNumRead < nNumToRead. When the end of file is read,
*lpNumRead = 0 and ReadFile returns TRUE.

Here's code taken from MSDN's documentation of ReadFile that demonstrates
how to test for the end-of-file condition:

// Attempt a synchronous read operation.
bResult = ReadFile(hFile, &inBuffer, nBytesToRead, &nBytesRead, NULL);
// Check for end of file.
if (bResult && nBytesRead == 0)
{
    // we're at the end of the file
}

So, ReadFile doesn't apply the logic of stating that it's a failure
condition when *lpNumRead < nNumToRead in sync situations. It also uses the
same timeout methodology that WriteFile exhibits when using serial port IO.

The point I was trying to make is that it's probably not a good idea to
assume that *lpNumWritten == nNumToWrite when WriteFile returns TRUE in sync
mode (which was the original question). The fact that the documentation
states that this assumption is false to begin with only reinforces my
argument. Should it be the other way around? A good argument could be made
for that, I agree. But nowhere in the documentation does it state this (in
fact, it states that WriteFile can return TRUE when *lpNumWritten <
nNumToWrite under certain circumstances).

I didn't have anything to do with the design and implementation of
WriteFile... I'm only writing based on my experience, so take my advice for
what it's worth. One basic tenet of programming is to check return values.
These return values are in there for a reason, and they should be checked
accordingly. Making assumptions about what the expected behavior is when
that behavior is not clearly spelled out in the documentation isn't a good
idea.

Chris Burnette
EOIR Technologies
Hector Santos
2005-04-11 10:22:53 UTC
Permalink
Post by Chris Burnette
...
I didn't have anything to do with the design and implementation of
WriteFile... I'm only writing based on my experience, so take my advice for
what it's worth. One basic tenet of programming is to check return values.
These return values are in there for a reason, and they should be checked
accordingly. Making assumptions about what the expected behavior is when
that behavior is not clearly spelled out in the documentation isn't a good
idea.
Chris,

I'm glad you're not designing my products <g> MSDN serves as a reference. It
does not replace 28+ years of a rich software engineering background,
especially in the telecommunications market (note that is before Windows!)

I think you have some misunderstanding of decades-old fundamental file I/O
and communications principles, and I think you might have read the docs
incorrectly on this. The ReadFile() analogy was poor, and finally, if these
unexpected possibilities did exist, then thousands of applications based on
standard RTL streaming and low-level file handling functions would
fundamentally break down. They are inherently synchronous, and their
implementation is based on fundamental blocking I/O operations.

For a non-timeout-prepared sync device, whether it's a read or a write, the
request is BLOCKED until the I/O is completed. For a READ, there is only
one condition for a partial read: not enough bytes available (EOF). This
is not the same thing as writing, where writing X is based on having the
EXPECTED space available to write.

Again, when a timeout concept is NOT part of the design, a BLOCKED call can
only behave one way: BLOCKED until it completes the request. This is the
essence of asynchronous vs. synchronous I/O operations. There is no bending
of the rules. It's one way only. Anything else that happens is a FLAW or an
ERROR.

Therefore, when there are NO timeout conditions, a blocking WriteFile() call
only returns with one of two possible design expectations:

- TRUE with request == written
- FALSE with an extended error code set.

If a TIMEOUT does occur here with a TRUE result and no error, then we have a
DESIGN FLAW in the sub-system. That function should NEVER return until it
finishes or an error occurs.


--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Post by Chris Burnette
I think one of the problems that I see is that there isn't enough
documentation on the expected behavior of Read/WriteFile. There is no
mention of what success really means under various conditions. It's up to us
to try to figure it out. And then there's always unexpected behaviors due to
hardware malfunctions, etc. Part of this is probably because
Read/WriteFile
Post by Chris Burnette
can be used in so many different ways, fron files, serial ports, pipes, etc.
Heck, the documentation doesn't even mention the various errors that
GetLastError can return. Try looking in the documentation for WriteFile to
see what the return value for GetLastError is when a disk is full. Not even
mentioned.
But, let's apply your argument to ReadFile. Let's assume we're using
ReadFile to read a series of bytes synchronously. According to your
argument, if ReadFile cannot read the requested number of bytes, it should
fail. However, this isn't the case. The documentation for ReadFile states
that when you read to the end of a file, it's normal for ReadFile to return
success with *lpNumRead < nNumToRead. When the end of file is read,
*lpNumRead = 0 and ReadFile returns TRUE.
Here's code taken from MSDN's documentation of ReadFile that demonstrates
this:
// Attempt a synchronous read operation.
bResult = ReadFile(hFile, &inBuffer, nBytesToRead, &nBytesRead, NULL) ;
// Check for end of file.
if (bResult && nBytesRead == 0 )
{
// we're at the end of the file
}
So, ReadFile doesn't apply the logic of stating that it's a failure
condition when *lpNumRead < nNumToRead in sync situations. It also uses the
same timeout methodology that WriteFile exhibits when using serial port IO.
The point I was trying to make is that it's probably not a good idea to
assume that *lpNumWritten == nNumToWrite when WriteFile returns TRUE in sync
mode (which was the original question). The fact that the documentation
states that this assumption is false to begin with only reinforces my
argument. Should it be the other way around? A good argument could be made
for that, I agree. But nowhere in the documentation does it state this (in
fact, it states that WriteFile can return TRUE when *lpNumWritten <
nNumToWrite under certain circumstances).
I didn't have anything to do with the design and implementation of
WriteFile... I'm only writing based on my experience, so take my advice for
what it's worth. One basic tenet of programming is to check return values.
These return values are in there for a reason, and they should be checked
accordingly. Making assumptions about what the expected behavior is when
that behavior is not clearly spelled out in the documentation isn't a good
idea.
Chris Burnette
EOIR Technologies
Post by Hector Santos
Post by Chris Burnette
Typically, I've found that WriteFile will report that it's written the
same
Post by Chris Burnette
number of bytes that you've told it to write. I would think that an
application should not necessarily assume that this is the case; these
return values are in there for a reason.
Chris, I respectfully disagree.
This is not a fuzzy situation. There is no unknown carbon intelligence
making decisions for us here. It is black and white. There is no gray
area.
If the design calls for sync behavior with no regard to timeouts, then
you must design the code based on expected behaviors. A sync call to
WriteFile() must return a written amount equal to the requested amount.
Otherwise there is an error condition.
If you can't "trust" the result, then there MUST be a reason for the
lack
Post by Chris Burnette
Post by Hector Santos
of
trust. You can't program this stuff blindly... but then again, thats
probably why bugs exist :-)
If you design your I/O with async in mind, then you use IO PENDING logic
to properly synchronize it.
If you design your I/O with sync, if what you say is true, then YOU have no
choice but to have every WriteFile() call in your code changed to use loop
concept watching for no error partial writes.
Unless you specifically expect this type of behavior (using serial write
timeouts for example), then that is completely redundant.
I'm not saying that isn't ok (every sync WriteFile() replaced with a wrapper
that watches for generic timeout partial writes), but you might as well use
an ASYNC I/O communications design.
But if I have a sync WriteFile() with no timeout parameters set for the
device (if possible), then I expect a blocked call and a 100% complete write.
--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Chris Burnette
2006-06-18 04:06:09 UTC
Permalink
I think you're missing the point.

The point is that expecting a blocking IO call to WriteFile to return
success with all of the bytes written is a mistake (at least under
certain conditions). There are documented examples of this not being the
case (whether it be through communications timeouts or writing to a
non-blocking byte-mode pipe handle).

Is this good? Probably not. I agree with you that maybe one should expect
WriteFile to return an error if not all of the bytes are written under all
conditions. However, for better or worse, this is not the way it is.

The original poster wanted to know if it was possible for WriteFile to
return true when not all of the bytes were written. Some responders couldn't
think of a condition. I supplied a couple of examples, one from the
WriteFile documentation using non-blocking byte-mode pipes and one using
comm timeouts. Are there other conditions that this might be the case? I
don't know. One would hope not.

Chris
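To make the advice above concrete: a caller that cannot assume request == written can wrap the write in a retry loop. The sketch below is hypothetical (write_all, mock_write, and struct sink are illustration names, not Win32 APIs); the writer_fn callback stands in for WriteFile(hFile, buf, len, &written, NULL) so the control flow can be shown and exercised without a live handle:

```c
#include <string.h>

/* Hypothetical helper, not a Win32 API: keep calling a writer until every
 * byte is accepted or the writer fails. The callback mirrors the shape of
 * WriteFile(hFile, buf, len, &written, NULL). */
typedef int (*writer_fn)(void *ctx, const unsigned char *buf,
                         unsigned long len, unsigned long *written);

int write_all(writer_fn write_fn, void *ctx,
              const unsigned char *buf, unsigned long len)
{
    while (len > 0) {
        unsigned long written = 0;
        if (!write_fn(ctx, buf, len, &written))
            return 0;          /* hard failure: caller inspects the error */
        if (written == 0)
            return 0;          /* no progress: avoid looping forever */
        buf += written;        /* advance past the partial write */
        len -= written;
    }
    return 1;
}

/* Mock writer that accepts at most 4 bytes per call - the kind of short
 * count a comm timeout or non-blocking byte-mode pipe can produce. */
struct sink { unsigned char data[64]; unsigned long used; };

int mock_write(void *ctx, const unsigned char *buf,
               unsigned long len, unsigned long *written)
{
    struct sink *s = (struct sink *)ctx;
    unsigned long n = len > 4 ? 4 : len;   /* simulate a partial write */
    memcpy(s->data + s->used, buf, n);
    s->used += n;
    *written = n;
    return 1;
}
```

On a real handle you would pass a thin wrapper around WriteFile as write_fn and inspect GetLastError() when write_all returns 0.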
Post by Chris Burnette
Post by Chris Burnette
...
I didn't have anything to do with the design and implementation of
WriteFile... I'm only writing based on my experience, so take my advice
for
Post by Chris Burnette
what it's worth. One basic tenet of programming is to check return
values.
Post by Chris Burnette
These return values are in there for a reason, and they should be checked
accordingly. Making assumptions about what the expected behavior is when
that behavior is not clearly spelled out in the documentation isn't a good
idea.
Chris,
I'm glad you're not designing my products <g> MSDN serves as a reference. It
does not replace 28+ years of a rich software engineering background,
especially in the Telecommunications market (note that is before Windows!)
I think you have some misunderstanding of decades-old fundamental FILE I/O
and communications principles, and I think you might have read the docs
incorrectly on this. The ReadFile() analogy was poor, and finally, if these
unexpected possibilities did exist, then thousands of applications based on
standard RTL streaming and low-level file handling functions would
fundamentally break down. They are inherently synchronous and their
implementation is based on fundamental blocking I/O operations.
For a non-timeout prepared sync device, whether its read or write, the
request is BLOCKED until the I/O is completed. For a READ, there is only
one condition for a partial read - not enough bytes available (EOF). This
is not the same thing as Writing where writing X is based on having the
EXPECTED space available to write.
Again, when a timeout concept is NOT part of the design, a BLOCKED call can
only behave one way - BLOCKED until it completes the request. This is the
essence of asynchronous vs synchronous I/O operations. There is no bending
of the rules. It's one way only. Anything else that happens is a FLAW or an
ERROR.
Therefore, when there are NO timeout conditions, a blocking WriteFile() call
can only return with one of two possible results:
- TRUE with request == written
- FALSE with an extended error code set.
If a TIMEOUT does occur here with a TRUE result and no error, then we have a
DESIGN FLAW in the sub-system. That function should NEVER return until it
finishes or an error occurs.
--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Hector Santos
2006-06-18 04:06:11 UTC
Permalink
Post by Chris Burnette
I think you're missing the point.
No I did not. You have no point actually.
Post by Chris Burnette
The point is that while expecting a blocking IO call to WriteFile to return
success and having all of the bytes written is a mistake (at least under
certain conditions).
No it is not a mistake. It is expected to behave this way.
Post by Chris Burnette
There are documented examples of this not being the
case (whether it be through communications timeouts or writing to a
non-blocking byte mode pipe handle).
And these are well known condition where the possibility is within the
expectations of the design. Don't you get it?
Post by Chris Burnette
Is this good? Probably not. I agree with you that one maybe should expect
WriteFile to return an error if not all of the bytes are written under all
conditions. However, for better or worse, this is not the way it is.
Then you missed the point and that is what it is and that is what is
expected.

In other words, if you don't have a flushing algorithm in place, then it is
a design consideration and criterion to expect a one-shot deal - anything
else is an ERROR in the design. That is not to say that the ERROR cannot
exist - but it does say it is a FLAW somewhere.
Post by Chris Burnette
The original poster wanted to know if it was possible for WriteFile to
return true when not all of the bytes were written.
and he specifically said "synchronous" mode, so the answer is it will always
return request=written, unless there is an ERROR or an expected design
condition.
Post by Chris Burnette
Some responders couldn't
think of a condition. I supplied a couple of examples, one from the
WriteFile documentation using non-blocking byte-mode pipes and one using
comm timeouts. Are there other conditions that this might be the case? I
don't know. One would hope not.
But both of your examples were in DIFFERENT DESIGN thinking - where there is
a possibility.

NON-BLOCKING is not a BLOCKING CALL
USING TIMEOUT is a poor man's "async" concept.

This is not synchronous. There is no other condition for a BLOCK call but
an error or unknown device flaw.

Again, the RTL file handling functions are designed to use ReadFile() and
WriteFile(). If the world was designed around these idiotic unknowns, then we
would be in a heap of trouble.

You say "One would hope not."

Well, your design determines your ground rules, and these rules are based on
sound FILE I/O principles that go back to day one.

If you design your WriteFile() with the expectation of a one-shot call, then
there is nothing wrong with that. If there is a problem, that has nothing
to do with Win32; it shows there is a BUG or problem or behavior that is
completely unexpected - i.e., NOT NORMAL.

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Alexander Grigoriev
2006-06-18 04:06:11 UTC
Permalink
BTW, whether the I/O is synchronous or asynchronous doesn't change anything.

We only consider the final I/O completion result, not the intermediate
ERROR_IO_PENDING condition. The final result obeys the same rules for both
synch/asynch cases.

With a direct-access storage device file, one can expect that WriteFile
either writes everything, or doesn't write at all and returns an error. But
if your program may perform I/O on a generic file handle, you need to be
aware of incomplete writes.

There is still possibility for DASD file I/O to write less than requested.
WriteFile first atomically extends the file, then writes data. If during the
write, another program shrinks the file back, I would speculate that
WriteFile may write incomplete buffer. It requires more experiments.
Hector Santos
2005-04-11 17:34:42 UTC
Permalink
Post by Alexander Grigoriev
BTW, whether the I/O is synchronous or asynchronous doesn't change anything.
It changes everything at the Win32 design consideration level.
Post by Alexander Grigoriev
We only consider the Ifinal /O completion result, not intermediate
ERROR_IO_PENDING condition. The final result obeys the same rules for both
synch/asynch cases.
Sorry, if I expect non-error no timeout conditions, then your device handler
better model and fit the upper layer expectations. Otherwise, you better
return an error.
Post by Alexander Grigoriev
With a direct-access storage device file, one can expect that WriteFile
either writes everything, or doesn't write at all and returns an error. But
if your program may perform I/O on a generic file handle, you need to be
aware of incomplete writes.
Exactly. A generic file handle can have different behaviors and this is
what you expect thus you will consider incomplete writes in your design.

But if a device is prepared and expected to have no timeout considerations,
then the call is blocked. It should not behave differently unless there is
an error or flaw in the design somewhere.
Post by Alexander Grigoriev
There is still possibility for DASD file I/O to write less than requested.
WriteFile first atomically extends the file, then writes data. If during the
write, another program shrinks the file back, I would speculate that
WriteFile may write incomplete buffer. It requires more experiments.
Again, this is part of your design considerations. If you expect to have
contention issues, then that's a different design issue.

We are talking about a pure black box concept of a 100% blocked WriteFile
call with a device that is 100% expected and not programmed to have
timeouts.

The OP asked:

Can we always assume that when WriteFile() (in synchronous operation
mode) returns successfully it has written the requested number of bytes,
i.e. nNumberOfBytesToWrite == *lpNumberOfBytesWritten?

Note he said, "synchronous operation" which implies block with no timeouts.

The answer is YES 100% of the time

If timeouts are expected, then the answer is obviously no - by design.

I hope you are not writing device drivers that break these fundamental
file I/O rules? <g>

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Slava M. Usov
2006-06-18 04:06:14 UTC
Permalink
"Hector Santos" <***@nospamhere.com> wrote in message news:***@TK2MSFTNGP09.phx.gbl...

[...]
Post by Hector Santos
Note he said, "synchronous operation" which implies block with no timeouts.
I do not understand how this implication comes about. Synchronous IO means
exactly one thing: the caller will not be able to do anything until IO
terminates, successfully, unsuccessfully, or semi-successfully. Or
quasi-successfully, if you will. WriteFile() is a generic routine and it
makes no assumptions as to how 'successfully', 'semi-successfully' and
'quasi-successfully' can be differentiated. It simply returns three things
to the caller: general success/failure status, size of the successful
transfer, and error code. It can do that synchronously or asynchronously.
The rest depends on the medium involved. The caller always knows the
character of the medium, and should interpret those three things properly.

Speaking of the medium, much depends on tradition, as you say. Files are
traditionally handled without any timeouts. Even though there are actually
timeouts as you go down to metal, they will only trigger re-tries and
eventually failures; timeouts and 'success' are incompatible for files,
async or not.

Communications, again traditionally, are built around timeouts. A timeout
may trigger either 'success' or 'failure', but it takes more than binary
logic to handle timeouts anyway, so what is actually returned as 'status' is
only a matter of convention; the important data are the 'size of successful
transfer' and 'error code'. With async IO, one can easily do without
built-in timeouts, because the caller can always cancel IO. With sync IO,
the caller cannot cancel IO, nor can anyone else; the only way to get rid of
stuck IO is to kill blocked threads, which leaks resources. The latter means
that sync IO becomes impractical without timeouts.

S
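The point above - that with async IO the caller can impose its own deadline by cancelling, instead of relying on built-in timeouts - can be sketched as follows. This is an untested illustration, not a vetted implementation: WriteWithDeadline is a hypothetical helper name, and hFile is assumed to have been opened with FILE_FLAG_OVERLAPPED:

```c
#include <windows.h>

/* Sketch: an overlapped WriteFile with a caller-imposed deadline.
 * Returns TRUE only if all bytes were written before the deadline. */
BOOL WriteWithDeadline(HANDLE hFile, const void *buf, DWORD len, DWORD ms)
{
    OVERLAPPED ov = {0};
    DWORD written = 0;
    BOOL ok;

    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);  /* manual-reset */
    if (ov.hEvent == NULL)
        return FALSE;

    if (!WriteFile(hFile, buf, len, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        CloseHandle(ov.hEvent);                 /* failed outright */
        return FALSE;
    }

    if (WaitForSingleObject(ov.hEvent, ms) != WAIT_OBJECT_0) {
        CancelIo(hFile);                        /* caller-side "timeout" */
        /* let the cancelled I/O settle before ov goes out of scope */
        GetOverlappedResult(hFile, &ov, &written, TRUE);
        CloseHandle(ov.hEvent);
        return FALSE;
    }

    ok = GetOverlappedResult(hFile, &ov, &written, FALSE);
    CloseHandle(ov.hEvent);
    return ok && written == len;                /* full transfer or bust */
}
```

Note this is exactly the trade discussed here: the timeout lives in the caller, not the driver, and a short transfer is reported as failure rather than quasi-success.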
Hector Santos
2005-04-11 20:14:09 UTC
Permalink
I don't know about you, but traditionally, across every language, hardware
platform, etc., Synchronous has 100% meant a blocked call. Asynchronous means
non-blocking augmented with some other signaling concept, be it event,
callback or message based. This is not Windows specific. These fundamental
I/O and communications principles are basically the same across all systems
and hardware, and I base that on nearly 30 years experience, 20+ languages,
15+ platforms from micros, minis, hybrids and mainframes.

That has nothing to do with whether there is a timeout consideration. It may
be part of the equation or not.

Maybe we should use the terms Blocking vs. Non-Blocking instead.

For a blocking WriteFile() when *no timeout* is expected, or the device is
prepared for *no timeout* behavior, the call is 100% blocked and there are
only two possible results:

- Success where request == written
- Error

Anything else is an unexpected design framework.

If a timeout is known to exist, then of course, it must be part of the
application design.

So in my opinion, the OP's answer is:

YES for 100% blocking design
NO for non-blocking designs.

I don't know how much simpler it can get or why this is even a question.
Deviation from this fundamental rule means CHAOS in application design. I
mean, what's the point of even thinking about "blocking" if a WriteFile()
can always return unexpectedly? It would mean a fundamental change across
the board for thousands if not millions of applications.

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Alexander Grigoriev
2005-04-12 05:32:57 UTC
Permalink
Blocking vs non-blocking I/O doesn't change anything in Windows I/O manager.
Error codes are the same, no matter what kind of I/O is used. Some I/O
operations are always blocking, no matter how the handle is opened. Such
operations include file expansion, for example. Some IOCTLs may be
inherently synchronous.

I/O timeout is not an application concept, it's a driver concept. It can
happen with both overlapped and non-overlapped I/O requests, and the
application should NOT care about it (other than setting COMM timeouts); NOR
does a driver know what kind of I/O request is issued by an app.

The only thing different about overlapped I/O is that you can cancel such
operations for a given thread (you can cancel all outstanding
Read/Write/Ioctl for a comm device, but for all threads).
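For reference, the COMM-timeout configuration mentioned above looks roughly like this (a sketch only; SetWriteTimeout is a hypothetical wrapper name, and hComm is assumed to be a serial-port handle from CreateFile). With the Write* members set, a synchronous WriteFile on hComm may return TRUE with *lpNumberOfBytesWritten < nNumberOfBytesToWrite - the documented case in this thread where "success" does not mean "all written":

```c
#include <windows.h>

/* Configure write timeouts on a comm handle. Leaving the Read* members
 * zero means reads block until the request is satisfied. */
BOOL SetWriteTimeout(HANDLE hComm, DWORD perByteMs, DWORD constantMs)
{
    COMMTIMEOUTS to = {0};
    to.WriteTotalTimeoutMultiplier = perByteMs;  /* per byte requested */
    to.WriteTotalTimeoutConstant   = constantMs; /* plus a fixed part  */
    return SetCommTimeouts(hComm, &to);
}
```

Setting every COMMTIMEOUTS member to zero restores the pure blocking behavior the thread treats as the synchronous baseline.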
Post by Hector Santos
I don't know about you, traditionally, across every language, hardware,
platform, etc, Synchronous 100% meant a blocked call. Asynchronous mean
non-blocking augmented with some other signaling concept, be it event,
callback or message based. This is now Windows specific. These fundamental
I/O and communications principles is the basically same across all systems
and hardware and I based that on nearly 30 years experience, 20+ languages,
15+ platforms from micros, minis, hybrids and mainframes.
That has nothing to do with the fact whether there is a timeout
consideration. It may part of the equation or not.
Maybe we should use the terms Blocking vs. Non-Blocking instead.
For a blocking WriteFile() when *no timeout* is expected or the device is
prepared for *no timeout* behavior, the call is 100% blocked and there are
- Success where request = written
- Error
Anything else is a unexpected design framework.
If a timeout is known to exist, then of course, it must be part of the
application design.
YES for 100% blocking design
NO for non-blocking designs.
I don't know how much simpler it can get or why this is even a question. The
deviation from this fundamental rule means CHAOS in application design. I
mean, what's the point of even thinking about "blocking" if a WriteFile()
can always return unexpectedly? It means a fundamental change across the
board for thousands if not millions of applications.
--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Post by Slava M. Usov
[...]
Post by Hector Santos
Note he said, "synchronous operation" which implies block with no timeouts.
I do not understand how this implication comes about. Synchronous IO means
exactly one thing: the caller will not be able to do anything until IO
terminates, successfully, unsuccessfully, or semi-successfully. Or
quasi-successfully, if you will. WriteFile() is a generic routine and it
makes no assumptions as to how 'successfully', 'semi-successfully' and
'quasi-successfully' can be differentiated. It simply returns three things
to the caller: general success/failure status, size of the successful
transfer, and error code. It can do that synchronously or asynchronously.
The rest depends on the medium involved. The caller always knows the
character of the medium, and should interpret those three things properly.
Speaking of the medium, much depends on tradition, as you say. Files are
traditionally handled without any timeouts. Even though there are actually
timeouts as you go down to metal, they will only trigger re-tries and
eventually failures; timeouts and 'success' are incompatible for files,
async or not.
Communications, again traditionally, are built around timeouts. A timeout
may trigger either 'success' or 'failure', but it takes more than binary
logic to handle timeouts anyway, so what is actually returned as 'status' is
only a matter of convention; the important data are the 'size of successful
transfer' and 'error code'. With async IO, one can easily do without
built-in timeouts, because the caller can always cancel IO. With sync IO,
the caller cannot cancel IO, nor can anyone else; the only way to get rid of
stuck IO is to kill blocked threads, which leaks resources. The latter means
that sync IO becomes impractical without timeouts.
S
Slava M. Usov
2005-04-12 11:47:26 UTC
Permalink
"Hector Santos" <***@nospamhere.com> wrote in message news:***@TK2MSFTNGP12.phx.gbl...

[...]
Post by Hector Santos
Synchronous 100% meant a blocked call.
This is what I said.

[...]
Post by Hector Santos
That has nothing to do with the fact whether there is a timeout
consideration. It may be part of the equation or not.
This is what I said, too.

These two statements contradict with your original statement that
'"synchronous operation" [...] implies block with no timeouts'.

[...]
Post by Hector Santos
For a blocking WriteFile() when *no timeout* is expected or the device
is prepared for *no timeout* behavior, the call is 100% blocked and there are
- Success where request = written
- Error
Anything else is an unexpected design framework.
'request = written' can be binary if only one byte is involved. If there are
more data, it depends on the medium. For serial ports and sockets, to name
just two, it can be less than the entire data size. This is not Windows
specific.

[...]
Post by Hector Santos
I don't know how much simpler it can get or why this is even a question.
It is not. The concept is much older than WriteFile(). Have a look at
_write(). As far as I can tell, it has not changed much since 1972 when what
later became known as the C Standard Library was created. And it may also
complete _successfully_ upon writing _less_ than requested.

S
Hector Santos
2005-04-12 20:50:23 UTC
Permalink
Post by Slava M. Usov
These two statements contradict with your original statement that
'"synchronous operation" [...] implies block with no timeouts'.
I don't think so. I think I have been very consistent. Synchronous means
BLOCK call with no timeouts. A block call with timeout considerations is in
effect a "poor man's async" concept. In effect, it simulates concurrency.
Post by Slava M. Usov
'request = written' can be binary if only one byte is involved. If there are
more data, it depends on the medium. For serial ports and sockets, to name
just two, it can be less than the entire data size. This is not Windows
specific.
But again, and again and again, if you prepare the sync device, including
serial and sockets, for no timeouts, you will get a BLOCK.

On a READ, you will block 100%

On a WRITE, you will get SUCCESS or ERROR

Where is the timeout if I turned it off?

If I say READ and WRITE X bytes why will the system ignore the request
without an error?
Post by Slava M. Usov
Post by Hector Santos
I don't know how much simpler it can get or why this is even a question.
It is not. The concept is much older than WriteFile(). Have a look at
_write(). As far as I can tell, it has not changed much since 1972 when what
later became known as the C Standard Library was created. And it may also
complete _successfully_ upon writing _less_ than requested.
It depends on the platform. _write() will write what it can by the very
nature and purity of the function but depending on the platform and/or
device it will return an error.

Again, it is how you apply the design. You have to know the environment. A
write is a write. Under DOS, it will write what it can with the remaining
space, but FAIL if there is no space. Under Windows, _write uses
WriteFile(). It will ALWAYS return an error for a sync, no timeout device.

Just read what the Windows _write() docs say:

_write()

Return Value
If successful, _write returns the number of bytes actually written. If the
actual space remaining on the disk is less than the size of the buffer the
function is trying to write to the disk, _write fails and does not flush any
of the buffer’s contents to the disk. A return value of –1 indicates an
error. In this case, errno is set to one of two values: EBADF, which means
the file handle is invalid or the file is not opened for writing, or ENOSPC,
which means there is not enough space left on the device for the operation.

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Slava M. Usov
2006-06-14 04:13:04 UTC
Permalink
Post by Hector Santos
Post by Slava M. Usov
These two statements contradict with your original statement that
'"synchronous operation" [...] implies block with no timeouts'.
I don't think so. I think I have been very consistent.
Unfortunately, no. You said: "That [synchronous vs asynchronous] has nothing
to do with the fact whether there is a timeout consideration. It may be part
of the equation or not."
Post by Hector Santos
Synchronous means BLOCK call with no timeouts.
No. Drop the timeouts part.
Post by Hector Santos
A block call with timeout considerations is in effect a "poor man's async"
concept.
It is. But with a tradition spanning decades. It is not something "invented
by MSFT".

[...]
Post by Hector Santos
But again, and again and again, if you prepare the sync device, including
serial and sockets, for no timeouts, you will get a BLOCK.
You will get just the same with some timeouts.
Post by Hector Santos
On a READ, you will block 100%
On a WRITE, you will get SUCCESS or ERROR
Where is the timeout if I turned it off?
If I say READ and WRITE X bytes why will the system ignore the request
without an error?
The system does not define error conditions. Reading less than x bytes from
a file because the file is shorter than that is not an error; this is a
feature of the file system [ = medium]. Writing less than x bytes to a
socket because the peer socket has stopped receiving is not an error; this
is a feature of the sockets [ = medium]. Timeouts make that only slightly
more complex.

I do not understand what you're trying to say now. I may be missing the
whole point. I thought, originally, that you were claiming WriteFile() was
not behaving "properly", because it could return, successfully, without
writing as much as requested. I maintain this behavior is not improper
simply because there are old and well-known IO primitives that behave
exactly in the same way.

S
Hector Santos
2005-04-13 16:55:28 UTC
Permalink
I am not going to get into semantics with you. This thread has gone far
enough.

I stand by what I say and I will continue to base the high quality
engineering of my software on it. :-)

Thanks for the NIC (non-interactive chat) <g>

Later

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Post by Slava M. Usov
Post by Hector Santos
Post by Slava M. Usov
These two statements contradict with your original statement that
'"synchronous operation" [...] implies block with no timeouts'.
I don't think so. I think I have been very consistent.
Unfortunately, no. You said: "That [synchronous vs asynchronous] has nothing
to do with the fact whether there is a timeout consideration. It may be part
of the equation or not."
Post by Hector Santos
Synchronous means BLOCK call with no timeouts.
No. Drop the timeouts part.
Post by Hector Santos
A block call with timeout considerations is in effect a "poor man's async"
concept.
It is. But with a tradition spanning decades. It is not something "invented
by MSFT".
[...]
Post by Hector Santos
But again, and again and again, if you prepare the sync device, including
serial and sockets, for no timeouts, you will get a BLOCK.
You will get just the same with some timeouts.
Post by Hector Santos
On a READ, you will block 100%
On a WRITE, you will get SUCCESS or ERROR
Where is the timeout if I turned it off?
If I say READ and WRITE X bytes why will the system ignore the request
without an error?
The system does not define error conditions. Reading less than x bytes from
a file because the file is shorter than that is not an error; this is a
feature of the file system [ = medium]. Writing less than x bytes to a
socket because the peer socket has stopped receiving is not an error; this
is a feature of the sockets [ = medium]. Timeouts make that only slightly
more complex.
I do not understand what you're trying to say now. I may be missing the
whole point. I thought, originally, that you were claiming WriteFile() was
not behaving "properly", because it could return, successfully, without
writing as much as requested. I maintain this behavior is not improper
simply because there are old and well-known IO primitives that behave
exactly in the same way.
S
m
2005-04-12 21:03:55 UTC
Permalink
There is a subtle complication here:



For regular IO, WriteFile can complete successfully with less than the
requested number of bytes written.

If the Handle is opened with the FILE_FLAG_OVERLAPPED flag, then the IO
cannot partially complete regardless of whether the IO is blocking or not.



Consider two threads that write to overlapping sections of a file. In order
to ensure sequential consistency, the first IOP MUST complete totally before
the second hits the disk.
Post by Slava M. Usov
[...]
Post by Hector Santos
Synchronous 100% meant a blocked call.
This is what I said.
[...]
Post by Hector Santos
That has nothing to do with the fact whether there is a timeout
consideration. It may be part of the equation or not.
This is what I said, too.
These two statements contradict with your original statement that
'"synchronous operation" [...] implies block with no timeouts'.
[...]
Post by Hector Santos
For a blocking WriteFile() when *no timeout* is expected or the device
is prepared for *no timeout* behavior, the call is 100% blocked and there are
- Success where request = written
- Error
Anything else is an unexpected design framework.
'request = written' can be binary if only one byte is involved. If there are
more data, it depends on the medium. For serial ports and sockets, to name
just two, it can be less than the entire data size. This is not Windows
specific.
[...]
Post by Hector Santos
I don't know how much simpler it can get or why this is even a question.
It is not. The concept is much older than WriteFile(). Have a look at
_write(). As far as I can tell, it has not changed much since 1972 when what
later became known as the C Standard Library was created. And it may also
complete _successfully_ upon writing _less_ than requested.
S
Alexander Grigoriev
2005-04-13 04:35:57 UTC
Permalink
FILE_FLAG_OVERLAPPED doesn't have anything to do with write ordering or
overlapped sections of file.

If write order is preserved, it is done no matter whether the handle was
opened with FILE_FLAG_OVERLAPPED or not.
Post by m
For regular IO, WriteFile can complete successfully with less than the
requested number of bytes written.
If the Handle is opened with the FILE_FLAG_OVERLAPPED flag, then the IO
cannot partially complete regardless of whether the IO is blocking or not.
Consider two threads that write to overlapping sections of a file. In order
to ensure sequential consistency, the first IOP MUST complete totally before
the second hits the disk.
Post by Slava M. Usov
[...]
Post by Hector Santos
Synchronous 100% meant a blocked call.
This is what I said.
[...]
Post by Hector Santos
That has nothing to do with the fact whether there is a timeout
consideration. It may be part of the equation or not.
This is what I said, too.
These two statements contradict with your original statement that
'"synchronous operation" [...] implies block with no timeouts'.
[...]
Post by Hector Santos
For a blocking WriteFile() when *no timeout* is expected or the device
is prepared for *no timeout* behavior, the call is 100% blocked and there are
- Success where request = written
- Error
Anything else is an unexpected design framework.
'request = written' can be binary if only one byte is involved. If there are
more data, it depends on the medium. For serial ports and sockets, to name
just two, it can be less than the entire data size. This is not Windows
specific.
[...]
Post by Hector Santos
I don't know how much simpler it can get or why this is even a question.
It is not. The concept is much older than WriteFile(). Have a look at
_write(). As far as I can tell, it has not changed much since 1972, when what
later became known as the C Standard Library was created. And it may also
complete _successfully_ upon writing _less_ than requested.
S
Slava M. Usov
2005-04-13 13:47:35 UTC
Permalink
Post by m
For regular IO, WriteFile can complete successfully with less than the
requested number of bytes written.
True.
Post by m
If the Handle is opened with the FILE_FLAG_OVERLAPPED flag, then the IO
cannon partially complete regardless of whether or not the IO is blocking
or not.
Not true.

It is exactly the same in both cases; in both cases, it depends on the
medium [read: driver stack].

S
m
2005-04-13 22:05:27 UTC
Permalink
Firstly, I concede that the exact behaviour depends on the type of device
you are accessing (socket versus file etc.).



My point, perhaps poorly articulated, was that in order for scatter / gather
IO or IOCP to work correctly, each write must be sequentially consistent.
If this constancy is not guaranteed, then there is no advantage whatsoever
to using either of these techniques (since you would have to prevent
concurrent overlapping writes and this becomes ebullient to sync IO except
in special cases). WriteFile etc. can return less than the requested number
of bytes as sent successfully and still guarantee sequential consistency
(i.e. shutdown was called on the remote host).



For NTFS disks and TCP sockets, making the handle overlapped allows the
following: (I don't know anything about serial ports)

1) Reads and writes are sequentially consistent and;

2) Multiple threads can call ReadFile, WriteFile, and CloseHandle without
any synchronization on a single handle.



If you still disagree, then I don't know how better to convince you except
to say take 5 minutes and write a test app.
Post by Slava M. Usov
Post by m
For regular IO, WriteFile can complete successfully with less than the
requested number of bytes written.
True.
Post by m
If the Handle is opened with the FILE_FLAG_OVERLAPPED flag, then the IO
cannon partially complete regardless of whether or not the IO is blocking
or not.
Not true.
It is exactly the same in both cases; in both cases, it depends on the
medium [read: driver stack].
S
Slava M. Usov
2005-04-13 23:17:34 UTC
Permalink
"m" <***@online.nospam> wrote in message news:***@tk2msftngp13.phx.gbl...

[...]
Post by m
For NTFS disks and TCP sockets, making the handle overlapped allows the
following: (I don't know anything about serial ports)
1) Reads and writes are sequentially consistent and;
I do not understand what "sequentially consistent" means, nor do I
understand what it has to do with the "successful partial write" problem
that we seem to be discussing. But I would like to point out that with
overlapped file IO, "sequentially" does not mean much, simply because the
file pointer is not maintained; each IO request happens at a location that
is described with the IO request. If two simultaneous IO requests happen to
share some location, then the resultant contents at that location is
unpredictable. With sockets, yes, IO does not overlap, but the resultant
order of chunks is still unpredictable.

[...]
Post by m
If you still disagree
Do I? I was disagreeing with something else, e.g.: "If the Handle is opened
with the FILE_FLAG_OVERLAPPED flag, then the IO cannot partially complete
regardless of whether the IO is blocking or not."

S
m
2005-04-14 03:18:54 UTC
Permalink
As I said, perhaps too subtly, I did not actually type what I meant.



Sequentially consistent operations are operations where each single operation
must complete before the next can proceed. This is either ensured by a
software construct, or by the actual underlying hardware.



In the case of two IOPs that write to overlapping regions of a file, the
fact that you cannot know which request actually occurred first does not
negate the fact that one of them did occur first, and the other second.



If these IOPs were not sequentially consistent, then you could crash the
whole OS just by doing this (corrupt the FS cache etc.)



Pavel is right, you do need _some_ synchronization between threads, but it
can be as simple as a bExit flag and a while(!bExit) loop. This would
prevent use of the handle after it is closed.



Of course this is really absurd, since it is quite useless to issue IOPs
that you cannot know the result of, most sane applications are designed not
to do this kind of thing.



BTW: Overlapped TCP sockets work just fine with multiple threads sending.
But this is just as useless except in some special cases (like multiplexing
/ demultiplexing apps).
Post by Slava M. Usov
[...]
Post by m
For NTFS disks and TCP sockets, making the handle overlapped allows the
following: (I don't know anything about serial ports)
1) Reads and writes are sequentially consistent and;
I do not understand what "sequentially consistent" means, nor do I
understand what it has to do with the "successful partial write" problem
that we seem to be discussing. But I would like to point out that with
overlapped file IO, "sequentially" does not mean much, simply because the
file pointer is not maintained; each IO request happens at a location that
is described with the IO request. If two simultaneous IO requests happen to
share some location, then the resultant contents at that location is
unpredictable. With sockets, yes, IO does not overlap, but the resultant
order of chunks is still unpredictable.
[...]
Post by m
If you still disagree
Do I? I was disagreeing with something else, e.g.: "If the Handle is opened
with the FILE_FLAG_OVERLAPPED flag, then the IO cannot partially complete
regardless of whether the IO is blocking or not."
S
Slava M. Usov
2005-04-14 11:40:14 UTC
Permalink
"m" <***@online.nospam> wrote in message news:***@TK2MSFTNGP09.phx.gbl...

[...]
Post by m
In the case of two IOPs that write to overlapping regions of a file; the
fact that you cannot know which request actually occurred first does not
negate the fact that one of them did occur first, and that other second.
This is absolutely not true. The disk class driver will simply forward all
the requests to the hardware driver, chopping them down as necessary and
submitting them simultaneously. The latter will then send them to the
hardware device. Contemporary SCSI drives may reorder individual sector
access, so they can become mixed, and the end result may be different from
both "IO 1 first, IO 2 second" and "IO 2 first, IO 1 second". When you add
hardware and software RAID levels, that becomes even less predictable.
Post by m
If these IOPs were not sequentially consistent, then you could crash the
whole OS just by doing this (corrupt the FS cache etc.)
FS does not do that for its metadata -- or when it does, it knows what it is
doing. That does not apply to "data".

[...]
Post by m
Of course this is really absurd, since it is quite useless to issue
IOPs that you cannot know the result of, most sane applications are
designed not to do this kind of thing.
Correct.
Post by m
BTW: Overlapped TCP sockets work just fine with multiple threads sending.
Except that the order of chunks is not guaranteed. It may never become
apparent with commodity hardware under light load, but this will bite you
immediately when you have server-grade hardware and sustained heavy load.
Ditto for file access.

S
m
2005-04-14 15:20:25 UTC
Permalink
Post by Slava M. Usov
Except that the order of chunks is not guaranteed. It may never become
apparent with commodity hardware under light load, but this will bite you
immediately when you have server-grade hardware and sustained heavy load.
Ditto for file access.
That is quite interesting since, as I write, we have several servers
(Windows Server 2003 Quad Xeon HT with U320 SCSI RAID and link aggregated
gigabit Ethernet) that flat-line at 100% CPU usage and have disk lights that
never go off during the day (every day). There has NEVER been any problem
with sequential consistency.



Obviously I don't know anything about server-grade hardware.



There is actually some synchronization in these apps, but that is there so
that we can know what we are doing; not to make it work.



I can say for certain:

1) No WriteFile or WSASend call has ever returned less than the
expected number of bytes and reported success in any of these apps. I do
check for this and raise an error - there would be corrupted data in this
case.

2) It is a practical abortion scheme to call CloseHandle or closesocket
from one thread while other threads are calling ReadFile / WSARecv or
WriteFile / WSASend. If the handles are not opened as overlapped, then this
doesn't work because threads can get 'stuck' inside one of these functions.
Overlapped handles always work correctly for me.

3) WSASend can be called on a single socket from multiple threads
without any specific synchronization. If you need to send multiple blocks
contiguously, then you need a critical section etc.



You could say that I am just lucky that everything has worked so far - I
wouldn't.
Slava M. Usov
2005-04-14 15:58:49 UTC
Permalink
"m" <***@online.nospam> wrote in message news:***@TK2MSFTNGP15.phx.gbl...

[...]
Post by m
Obviously I don't know anything about server-grade hardware.
What exactly are you arguing with?
Post by m
There is actually some synchronization in these apps, but that is there so
that we can know what we are doing; not to make it work.
So, do you have multiple outstanding sends on a socket, each part of one
message? Multiple outstanding writes on a file, multi-sector, at overlapping
locations?
Post by m
1) No WriteFile or WSASend call has ever returned less than the
expected number of bytes and reported success in any of these apps. I do
check for this and raise an error - there would be corrupted data in this
case.
And your point exactly is? "WriteFile() never behaves that way"? Then it is
false, because it does. If it is something else, then it is beside the
point.
Post by m
2) It is a practical abortion scheme to call CloseHandle or
closesocket from one thread while other threads are calling ReadFile /
WSARecv or WriteFile / WSASend. If the handles are not opened as
overlapped, then this doesn't work because threads can get 'stuck' inside
one of these functions. Overlapped handles always work correctly for me.
What does this have to do with the discussion so far?
Post by m
3) WSASend can be called on a single socket from multiple threads
without any specific synchronization.
As if it had ever been questioned.
Post by m
If you need to send multiple blocks contiguously, then you need a critical
section etc.
This merely re-states what I wrote in the previous message. So, what are you
arguing with?

S
m
2005-04-14 17:41:44 UTC
Permalink
As an End of Thread statement:

To the OP,

In general, don't assume that the number of bytes written is the same as the
number of bytes requested.

If you know EXACTLY what you are doing, then other considerations may apply.



Slava,

I think that we may have been talking at cross purposes.
Post by Slava M. Usov
[...]
Post by m
Obviously I don't know anything about server-grade hardware.
What exactly are you arguing with?
Post by m
There is actually some synchronization in these apps, but that is there so
that we can know what we are doing; not to make it work.
So, do you have multiple outstanding sends on a socket, each part of one
message? Multiple outstanding writes on a file, multi-sector, at overlapping
locations?
Post by m
1) No WriteFile or WSASend call has ever returned less than the
expected number of bytes and reported success in any of these apps. I do
check for this and raise an error - there would be corrupted data in this
case.
And your point exactly is? "WriteFile() never behaves that way"? Then it is
false, because it does. If it is something else, then it is beside the
point.
Post by m
2) It is a practical abortion scheme to call CloseHandle or
closesocket from one thread while other threads are calling ReadFile /
WSARecv or WriteFile / WSASend. If the handles are not opened as
overlapped, then this doesn't work because threads can get 'stuck' inside
one of these functions. Overlapped handles always work correctly for me.
What does this have to do with the discussion so far?
Post by m
3) WSASend can be called on a single socket from multiple threads
without any specific synchronization.
As if it had ever been questioned.
Post by m
If you need to send multiple blocks contiguously, then you need a critical
section etc.
This merely re-states what I wrote in the previous message. So, what are you
arguing with?
S
Frank A. Uepping
2005-04-15 20:47:57 UTC
Permalink
Post by m
To the OP,
In general, don't assume that the number of bytes written is the same as the
number of bytes requested.
Right.
Post by m
If you know EXACTLY what you are doing, then other considerations may apply.
Thank you all for discussing this, it was very detailed.

Thanks
FAU

Hector Santos
2005-04-13 23:25:47 UTC
Permalink
Post by m
My point, perhaps poorly articulated, was that in order for scatter / gather
IO or IOCP to work correctly, each write must be sequentially consistent.
If this consistency is not guaranteed, then there is no advantage whatsoever
to using either of these techniques (since you would have to prevent
concurrent overlapping writes and this becomes equivalent to sync IO except
in special cases). WriteFile etc. can return less than the requested number
of bytes as sent successfully and still guarantee sequential consistency
(i.e. shutdown was called on the remote host).
Good point.
Post by m
For NTFS disks and TCP sockets, making the handle overlapped allows the
following: (I don't know anything about serial ports)
Same issues of I/O design consistency apply to serial ports too.

The key difference with SERIAL vs SOCKETS from a communications standpoint:

- Sockets have built-in error correction.
- Sockets have built-in "flow control"

For serial, you have to program it yourself or prepare the modem to do it
for you.

In either case, if I say write X bytes then it had better complete or
error out.

If you program timeouts into it, then you know what's going on and design
accordingly. But if you don't expect timeouts, it better complete or error
out.
Post by m
If you still disagree, then I don't know how better to convince you except
to say take 5 minutes and write a test app.
He doesn't have to. Everyone has hundreds of applications on Windows that
would break if the device drivers suddenly started to time out without an
error reason :-)

Can you imagine the terrible consequences if device driver developers
started to design things with a mentality such as:

"I know the application layer turned off my timeouts, but that's
ridiculous! I don't think anyone should write for over 1 hour, therefore
I will timeout without error. Hopefully, the upper layer will understand
why and try again later!"

or

"Oh gee, that stupid network layer is acting really slow. I think I will
wait 10 more minutes to see if it picks up again and timeout with no
error. Hmmm, wait oh gosh! I don't have any timeout capabilities
documented in my device driver product. Oh hell, I'll change it and
let the application worry about it later! We'll do this under the
name of "security" to justify the change!"

Oh gosh [Body shaking] What a nightmare! <g>

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Pavel Lebedinsky
2005-04-14 02:12:13 UTC
Permalink
Post by m
For NTFS disks and TCP sockets, making the handle overlapped allows the
following: (I don't know anything about serial ports)
1) Reads and writes are sequentially consistent and;
2) Multiple threads can call ReadFile, WriteFile, and CloseHandle without
any synchronization on a single handle.
If you close a handle it becomes invalid and you can't legally use it
anymore. So you will at least need some kind of synchronization to
avoid this.
Frank A. Uepping
2005-04-11 17:39:08 UTC
Permalink
Post by Chris Burnette
I think one of the problems that I see is that there isn't enough
documentation on the expected behavior of Read/WriteFile. There is no
mention of what success really means under various conditions. It's up to us
to try to figure it out. And then there's always unexpected behaviors due to
hardware malfunctions, etc. Part of this is probably because Read/WriteFile
can be used in so many different ways, from files, serial ports, pipes, etc.
Heck, the documentation doesn't even mention the various errors that
GetLastError can return. Try looking in the documentation for WriteFile to
see what the return value for GetLastError is when a disk is full. Not even
mentioned.
But, let's apply your argument to ReadFile. Let's assume we're using
ReadFile to read a series of bytes synchronously. According to your
argument, if ReadFile cannot read the requested number of bytes, it should
fail. However, this isn't the case. The documentation for ReadFile states
that when you read to the end of a file, it's normal for ReadFile to return
success with *lpNumRead < nNumToRead. When the end of file is read,
*lpNumRead = 0 and ReadFile returns TRUE.
Here's code taken from MSDN's documentation of ReadFile that demonstrates
this:

// Attempt a synchronous read operation.
bResult = ReadFile(hFile, &inBuffer, nBytesToRead, &nBytesRead, NULL);
// Check for end of file.
if (bResult && nBytesRead == 0)
{
// we're at the end of the file
}

So, ReadFile doesn't apply the logic of stating that it's a failure
condition when *lpNumRead < nNumToRead in sync situations. It also uses the
same timeout methodology that WriteFile exhibits when using serial port IO.
The point I was trying to make is that it's probably not a good idea to
assume that *lpNumWritten == nNumToWrite when WriteFile returns TRUE in sync
mode (which was the original question). The fact that the documentation
states that this assumption is false to begin with only reinforces my
argument. Should it be the other way around? A good argument could be made
for that, I agree. But nowhere in the documentation does it state this (in
fact, it states that WriteFile can return TRUE when *lpNumWritten <
nNumToWrite under certain circumstances).
I didn't have anything to do with the design and implementation of
WriteFile... I'm only writing based on my experience, so take my advice for
what it's worth. One basic tenet of programming is to check return values.
These return values are in there for a reason, and they should be checked
accordingly. Making assumptions about what the expected behavior is when
that behavior is not clearly spelled out in the documentation isn't a good
idea.
Chris Burnette
EOIR Technologies
I exactly share your opinion.

Thanks
FAU
Hector Santos
2005-04-11 18:21:20 UTC
Permalink
Post by Frank A. Uepping
Post by Chris Burnette
I didn't have anything to do with the design and implementation of
WriteFile... I'm only writing based on my experience, so take my advice for
what it's worth. One basic tenet of programming is to check return values.
These return values are in there for a reason, and they should be checked
accordingly. Making assumptions about what the expected behavior is when
that behavior is not clearly spelled out in the documentation isn't a good
idea.
Chris Burnette
EOIR Technologies
I exactly share your opinion.
If you share this opinion, then you feel that, regardless of synchronous
expectations, you must NEVER use a single call to WriteFile, because you
can't trust it anymore, and it must ALWAYS be replaced with a flushing
concept:

BOOL SyncWriteFile(
    HANDLE hFile,      // handle to file
    LPCVOID lpBuffer,  // data buffer
    DWORD nAmount)     // number of bytes to write

{
    const BYTE *p = (const BYTE *)lpBuffer;  // LPCVOID has no pointer arithmetic
    DWORD nTotal = 0;
    while (nTotal < nAmount) {
        DWORD w = 0;
        if (!WriteFile(hFile, p + nTotal, nAmount - nTotal, &w, NULL)) {
            return FALSE;
        }
        nTotal += w;
    }
    return TRUE;
}
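The flushing logic above can be demonstrated portably with a hypothetical
writer callback in place of WriteFile (all names here are invented for the
sketch; this is an illustration of the loop, not the Win32 API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical writer callback standing in for WriteFile: accepts up to
 * cap bytes, reports how many were actually taken via *written, and
 * returns 0 on failure, nonzero on success. */
typedef int (*writer_fn)(void *ctx, const char *buf, size_t cap,
                         size_t *written);

/* The flush loop in portable form: keep calling the writer until every
 * byte is accepted or it reports a hard error. */
static int write_all(writer_fn wr, void *ctx, const char *buf, size_t len)
{
    size_t total = 0;
    while (total < len) {
        size_t w = 0;
        if (!wr(ctx, buf + total, len - total, &w))
            return 0;        /* hard error: give up */
        total += w;          /* short write: retry with the remainder */
    }
    return 1;
}

/* Mock device that accepts at most 4 bytes per call ("short writes").
 * Assumes the test writes less than the mock's 64-byte capacity. */
struct sink { char out[64]; size_t pos; };

static int sink_write(void *ctx, const char *buf, size_t cap, size_t *written)
{
    struct sink *s = ctx;
    size_t n = cap > 4 ? 4 : cap;
    memcpy(s->out + s->pos, buf, n);
    s->pos += n;
    *written = n;            /* success, but possibly fewer than asked */
    return 1;
}
```

A production version would also bail out if the writer keeps reporting
success with zero bytes written; otherwise the loop can spin forever.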

In short, what you have just done is made WriteFile() obsolete, because what
you are saying is that you can't TRUST the system.

Unrealistic in the real world of File I/O and communications design.

Let's assume the above is what is required for every single (now obsolete)
WriteFile() call. What if there is some latency in the writing? What if
there is a normal delay in writing to a device that continuously returns a
non-error timeout? What does that mean under a proper redesign?

It now means you need to add some sanity checking to the SyncWriteFile()
call:

BOOL SyncWriteFile(
    HANDLE hFile,          // handle to file
    LPCVOID lpBuffer,      // data buffer
    DWORD nAmount,         // number of bytes to write
    DWORD nWaitTimeMsecs)  // give up after this long

{
    const BYTE *p = (const BYTE *)lpBuffer;  // LPCVOID has no pointer arithmetic
    DWORD nTotal = 0;
    while (nTotal < nAmount) {
        DWORD w = 0;
        if (!WriteFile(hFile, p + nTotal, nAmount - nTotal, &w, NULL)) {
            return FALSE;
        }
        nTotal += w;
        if (nTotal >= nAmount) {
            break;
        }
        Sleep(100);
        if (nWaitTimeMsecs <= 100) {  // DWORD is unsigned; don't let it wrap
            SetLastError(??? WHAT EVER DO YOU USE ???);
            return FALSE;
        }
        nWaitTimeMsecs -= 100;
    }
    return TRUE;
}

So what ERROR do you use when the sanity check returns FALSE?

ERROR_SERVICE_REQUEST_TIMEOUT?

Or some other error?

But look at what this basically boils down to: you don't need the wrapper
function!

DWORD w = 0;
if (WriteFile(hFile, lpBuffer, nAmount, &w, NULL) && nAmount != w) {
    // unexpected result
    SetLastError(??? WHAT EVER DO YOU USE ???);
    return FALSE;
}

The point is, while it is fantastic to secure the robustness of the software
by adding complete error-handling logic, there comes a point where logic and
common-sense software engineering is an important consideration in the
design.

In this case, where a blocking WriteFile() is expected, it would be a
CRITICAL SYSTEM ERROR for it to behave in a different way, one requiring
very significant follow-up in terms of how to deal with it from a user and
customer standpoint.

Again, you said "synchronous operation", which implies BLOCKING with no
TIMEOUTS.

If it's something else, like async, or timeouts are expected, that's a
different issue. But then again, that is not a 100% blocked "synchronous
operation."


--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Terry
2005-04-13 00:36:46 UTC
Permalink
The PSDK says that WriteFile() is the Win32 equivalent of fwrite(), and
about fwrite() it says: "fwrite returns the number of full items actually
written, which may be less than count if an error occurs.". This implies
that if no error occurs, all items will have been written.

So, if the semantics of fwrite() are applied to WriteFile(), then if
WriteFile() does not report an error, all data was written.
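The fwrite() contract quoted above reduces to a one-line check; here is a
minimal sketch (checked_fwrite is a hypothetical name, not a CRT function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Write a buffer with fwrite and apply the documented contract:
 * items written < count implies an error occurred, so anything
 * short of a full write is treated as failure. */
static int checked_fwrite(const char *buf, size_t len, FILE *fp)
{
    size_t written = fwrite(buf, 1, len, fp);
    return written == len;
}
```

Under this reading, a TRUE result from checked_fwrite() really does mean
all the data was handed to the stream, which is the semantic being claimed
for WriteFile().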
Post by Frank A. Uepping
Hello,
Can we always assume that when WriteFile() (in synchronous operation
mode) returns successfully it has written the requested number of bytes,
i.e. nNumberOfBytesToWrite == *lpNumberOfBytesWritten?
Or is it legal for WriteFile() to return successfully without having
written all requested bytes, i.e. nNumberOfBytesToWrite >
*lpNumberOfBytesWritten?
(I assume the latter, otherwise I see no sense for having
lpNumberOfBytesWritten.)
Thanks
FAU
David Craig
2005-04-13 01:51:11 UTC
Permalink
Where does MSDN Library equate fwrite() with WriteFile()? The documentation
specifically mentions that the number of bytes written can be less than the
amount requested under certain circumstances with the return being non-zero
(TRUE). This has been the case for many years.

In any case, I don't see any reason to expect the semantics to transfer from
WriteFile() to fwrite().
Post by Terry
The PSDK says that WriteFile() is the Win32 equivalent of fwrite(), and
about fwrite() it says: "fwrite returns the number of full items actually
written, which may be less than count if an error occurs.". This implies
that if no error occurs, all items will have been written.
So, if the semantics of fwrite() are applied to WriteFile(), then if
WriteFile() does not report an error, all data was written.
Post by Frank A. Uepping
Hello,
Can we always assume that when WriteFile() (in synchronous operation
mode) returns successfully it has written the requested number of bytes,
i.e. nNumberOfBytesToWrite == *lpNumberOfBytesWritten?
Or is it legal for WriteFile() to return successfully without having
written all requested bytes, i.e. nNumberOfBytesToWrite >
*lpNumberOfBytesWritten?
(I assume the latter, otherwise I see no sense for having
lpNumberOfBytesWritten.)
Thanks
FAU
Terry
2005-04-13 20:03:21 UTC
Permalink
Post by David Craig
Where does MSDN Library equate fwrite() with WriteFile()?
Search MSDN for "WriteFile fwrite", the article "Win32 Equivalents for C
Run-Time Functions".
Post by David Craig
The documentation
specifically mentions that the number of bytes written can be less than the
amount requested under certain circumstances with the return being non-zero
(TRUE). This has been the case for many years.
The question is not whether WriteFile() may or may not write less than the
amount requested.

The question is: if WriteFile() returns no-error, was all of the
requested-data written? Do we need to check that "nNumberOfBytesToWrite ==
*lpNumberOfBytesWritten"? If WriteFile() returns error, we know we need to
check those numbers -- the question is, do we need to check the numbers if
WriteFile() returns success?
Post by David Craig
In any case, I don't see any reason to expect the semantics to transfer from
WriteFile() to fwrite().
As has been pointed out, a lot of software would break (as described by
Hector Santos elsewhere in this thread) if this semantic was not
implemented. The confusion has arisen because this behavior is not
explicitly stated in the WriteFile() doc -- thus the OP's question.

Because the doc on fwrite() says that the amount written may be less "if an
error occurs", this implies that if an error did not occur, the amount
written will not be less than requested.

So, by linking the fwrite() doc with the "Win32 Equivalents for C Run-Time
Functions" doc, if WriteFile() is truly supposed to be the Win32 equivalent
of fwrite(), then the behavior expected and observed is actually documented.

If this is not true, then there needs to be an update to the Win32
Equivalents-doc that puts an asterisk next to it, like it has for
SetFilePointer(), noting where behavior deviates from fwrite().
Post by David Craig
Post by Terry
The PSDK says that WriteFile() is the Win32 equivalent of fwrite(), and
about fwrite() it says: "fwrite returns the number of full items actually
written, which may be less than count if an error occurs.". This implies
that if no error occurs, all items will have been written.
So, if the semantics of fwrite() are applied to WriteFile(), then if
WriteFile() does not report an error, all data was written.
Post by Frank A. Uepping
Hello,
Can we always assume that when WriteFile() (in synchronous operation
mode) returns successfully it has written the requested number of bytes,
i.e. nNumberOfBytesToWrite == *lpNumberOfBytesWritten?
Or is it legal for WriteFile() to return successfully without having
written all requested bytes, i.e. nNumberOfBytesToWrite >
*lpNumberOfBytesWritten?
(I assume the latter, otherwise I see no sense for having
lpNumberOfBytesWritten.)
Thanks
FAU
Slava M. Usov
2005-04-13 20:38:18 UTC
Permalink
"Terry" <***@homes4sale.us> wrote in message news:***@TK2MSFTNGP12.phx.gbl...

[...]
Post by Terry
The question is: if WriteFile() returns no-error, was all of the
requested-data written?
Not necessarily.
Post by Terry
Do we need to check that "nNumberOfBytesToWrite ==
*lpNumberOfBytesWritten"?
Yes.
Post by Terry
If WriteFile() returns error, we know we need to check those numbers
-- the question is, do we need to check the numbers if WriteFile() returns
success?

It is exactly the other way around. If it returns error, then you needn't
check the numbers -- you know it failed, so it cannot have written all the
data. The numbers need only be checked when it succeeds.
Post by Terry
As has been pointed out, a lot of software would break (as described by
Hector Santos elsewhere in this thread) if this semantic was not
implemented.
Not necessarily. WriteFile() may or may not write less than requested. That
mostly depends on what the application does. When you write files, it is
extremely unlikely it will write less and return success.

[...]
Post by Terry
Because the doc on fwrite() says that the amount-written may be less "if
an error occured", this implies that if an error did not-occur, the
amount-written will not be less than requested.
fwrite() is irrelevant. Read the docs on WriteFile() if you want to know
what WriteFile() does. If you're looking for analogies, then look at
_write(). _write(), unlike fwrite(), deals with low-level IO, same as
WriteFile().

S
Hector Santos
2005-04-13 22:49:10 UTC
Permalink
I am trying to figure out why the main point is so difficult to understand.
Are you a software engineer? An applications developer? A systems developer?
A driver developer? All of the above?

That is not meant to demean you, not at all. But I have to approach each one
differently, and it seems to me that some very fundamental engineering
concepts are not being understood.

If you design an I/O application with the concrete design criterion that
there be no timeouts and no overlapped I/O, then the I/O is basically
working in synchronous blocking mode.

Under sync with no timeouts (whether that means the device is known to have
no timeouts or you programmatically set the device timeout values using some
known function, e.g., SetCommTimeouts), WriteFile() will block and 100% of
the time return:

- TRUE with request == written, or
- FALSE with an extended error

There is NO other expectation.

What you seem to be suggesting is that this basic fundamental idea for Win32
is erroneous, because indeed it is very possible to return TRUE with written
< request regardless of the device and/or its device capabilities and/or
programmable device attributes.

If this is true, then you need to come to understand and realize the extreme
ramifications this has for current Win32 I/O software designed under these
basic fundamental premises. It is fundamentally important to understand this
point.

Now, by no means does this suggest that you don't need to check the values
when WriteFile returns TRUE. Absolutely not.

What I am saying is that if this is the case, where you have:

    // Sync, no timeout expected
    DWORD written = 0;
    if (WriteFile(h, buffer, request, &written, NULL) && written != request) {
        ReportCriticalError("Failed to write requested amount",
                            GetLastError());
        return FALSE;
    }

then it represents a serious design flaw and/or a hardware problem on the
machine (device) that will require a different course of software
engineering action.

It will introduce new design considerations:

1) Replace all WriteFile() calls with a wrapper WriteFileFlush() function
2) What are the retry limits?
3) How do you find out what the real error is?
4) What final action do you take if you can't complete the request?

Let's assume that this is OK and you indeed do all of the above; there are
two ultimate results:

1) Success
2) Error

Success means that the retries and flushing eventually succeed; the device
finally "caught up" with whatever was delaying the writes.

Error means that, after some time limit or retry count, the request was not
completed. In this case, all you can do is give the user some form of
"critical error" action. This is great. No problem. It should be like
that.

Well, if these are the design possibilities, why not just completely
redesign the entire thing under an asynchronous, overlapped I/O framework?
Why bother with non-async WriteFile at all? In effect, what you are saying
is that sync WriteFile() is obsolete.

The fact is that asynchronous WriteFile() operation is not required for all
designs or application needs. Anything short of completing the request is an
error.

--
Hector Santos, Santronics Software, Inc.
http://www.santronics.com
Post by Slava M. Usov
[...]
Slava M. Usov
2005-04-13 23:17:20 UTC
Permalink
"Hector Santos" <***@nospamhere.com> wrote in message news:***@TK2MSFTNGP09.phx.gbl...

[...]
Post by Hector Santos
What you seems to be suggesting that this basic fundamental idea for WIN32
is erroneous because indeed it is very possible to return TRUE with
written < request regardless of the device and/or its device capabilities
and/or programmable device attributes.
Not regardless. It depends on all the things that you enumerated above. But,
if we consider WriteFile() "generically", then it is possible. But then it
is not something special or win32 specific -- any other generic IO routine
on any other platform behaves similarly, as I mentioned in the _write()
example.

[...]
Post by Hector Santos
1) Replace all WriteFile() calls with a wrapper WritFileFlush() function
If you're dealing with sockets, yes. If you're dealing with files, no. If
you're dealing with serial ports and finite timeouts, probably. There is no
universal answer, except that "WriteFile() may write less than requested and
still indicate 'success'".


[...]
Post by Hector Santos
1) Success
2) Error
These are subjective. At any rate, they are what the medium/drivers think
they are.

I should say again that your definition of "synchronous" is not win32's
definition of "synchronous".

S
David Craig
2006-06-14 04:13:12 UTC
Permalink
If you say write 64KB to a file located on a floppy disk and there are only
enough clusters to hold 32KB, what should happen? The current rules are
that written will be less than requested, but no error occurred, because
what could be written was written without error. Character devices such as
the serial port are somewhat different: if the total fails to write, it has
to be because of an error. Serial cannot fail without it being an error.

If you are writing to a floppy in direct-access mode by using CHS and you
pick a non-existent CHS combination, then it is an error and should not
return success. File systems are a little different, though. They can only
tell you about the media being full, but is that really an error? I guess
Microsoft decided it was not when they wrote the specs. This has been true
for at least 8 years.
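The floppy scenario can be modeled with a capped writer: a hypothetical
device (names invented for the sketch) with only so many bytes of space
left accepts what fits and still reports success, which is exactly the
TRUE-with-written-less-than-requested case:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical nearly-full device: '*free_bytes' of space remain. A write
 * request takes what fits, reports it via *written, and returns success
 * (nonzero) because the bytes it did accept were stored without error. */
static int capped_write(size_t *free_bytes, size_t requested, size_t *written)
{
    size_t n = requested < *free_bytes ? requested : *free_bytes;
    *free_bytes -= n;
    *written = n;            /* may be less than 'requested'... */
    return 1;                /* ...yet the call still "succeeds" */
}
```

Asking this device for a 64KB write when only 32KB of clusters remain
yields success with half the data written, so the caller must compare
written against requested even on the success path.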
Post by Hector Santos
[...]
David Craig
2005-04-13 23:22:22 UTC
Permalink
Several problems with that article. Did you see that it applies to nothing
after 95 and NT 3.5? Somewhat misleading, I will agree. Also note the
following statement in the WriteFile() documentation: "When writing to a
nonblocking, byte-mode pipe handle with insufficient buffer space, WriteFile
returns TRUE with *lpNumberOfBytesWritten < nNumberOfBytesToWrite."

Returning non-zero means there was no error. My statement gave both parts.
Yes, the fwrite() docs do say it will return an error if all could not be
written. Have you looked into the CRT and seen how fwrite() is implemented?
It looks interesting, and it does check for the amount written being less
than the amount requested when TRUE is being returned from WriteFile().
Post by Terry
[...]