-----------------------------------------------------------------
NOV-BAK3.DOC -- 19970222 -- Email thread on NetWare Backup issues
-----------------------------------------------------------------

	Feel free to add or edit this document and then email
	it back to faq@jelyon.com




Date: Tue, 1 Oct 1996 13:31:42 -0400
From: Chris Brown <CBROWN@SEITZ.COM>
Subject: Re: 4 mm DAT Tape length

>Recently there has been some discussion about backup tape units.
>
>What I'd like to know is the length of 4 mm DAT tape that one should
>be using.
>
>From what I have been told, one should use only 90 m tapes, as they
>are more reliable than 120 m tapes because the 120 m tape is
>"thinner" and can break more easily.
>
>So I have been using 90 m tapes, but on my 8 GB tape drive this means
>that I can only store 4 GB on a 90 m tape. I will be getting 6 GB of
>storage soon, and this would mean that by using 90 m tapes I will
>need 2 tapes and there would not be anyone to change the tape during
>the backup window.
>
>Does anyone have any comments on the 90 m and 120 m tape lengths?

Here are several posts from another list, and I can say first hand
that a lot of this stuff is correct.

Date: Wed, 4 Sep 1996 11:31:56 -0400

I've read with great interest the issues everyone has stated with
backups. I've got my own story to tell about HP-DAT and 120m tapes.

I've had the best luck with 90m tapes; TDK, HP, Sony, and Fuji all
seem about the same. As for the 120m tapes, it's almost hopeless. I
had 36 HP 120m tapes that all failed within a six-month period. I
returned them to HP for warranty replacement.

Seems there is a SERIOUS "dirt" and contamination problem with DAT
tapes, and a second problem with self-demagnetization.  In the
process of tracking down the problem I had several very long and
informative talks with HP engineers involved with their tape
products. Being an eight-year HP veteran myself, I was able to get a
little further than some callers might. Some of the engineers were
very candid and helpful in describing the inherent problems with DAT.

Engineers asked me about my specific application for the DAT tapes:
was it for archive or backup? I thought the two were the same!
Some of the engineers said they didn't consider DAT to be viable for
storage over 6 months, and then only under critically controlled
climates for the "archive" tapes.

As described to me by HP engineers:

PROBLEM #1: Contamination and cross contamination.

The tapes all have some loose magnetic particles on them; these loose
particles and microscopic peaks of magnetic material on the tape
will build up on the heads. Once the heads are clouded with this
stuff they start producing read errors. When the drive has a read
error, it will reverse and retry the spot with the error, up to 60
times. This has the effect of taking a marginal spot on the tape and
wearing it out quickly, while smearing more dirty particles onto it.

Now if you change tapes without cleaning, as I did, to try a new one,
the dirty head will smear particles on the new tape, contaminating it
as well. Once that happens the tape will have loose and smeared
magnetic particles that will contaminate the heads whenever it is
inserted.

The only prevention is very regular cleaning, perhaps as often as
each time a different tape is installed. HP engineers reiterated that
you can't clean those drives enough.

The 120m tapes are much more susceptible to this problem than the 90m
tapes because their writing density is significantly higher.

Some of the HP guys said that their tapes are better in this regard
than other manufacturers' tapes because all HP tapes are tested and
exercised, which removes a large amount of the peaks and
contamination. I haven't found this to be true. They also told me
that they don't make the tapes themselves; they buy them from one of
the large manufacturers, Sony I think <g>.



PROBLEM #2: Self-demagnetization.

The information on the tapes is stored as very small magnetized
patterns. If you push a bunch of weak magnets very close together,
after time they will start to demagnetize each other. This is what
happens with DAT tapes, and it is worse with 120m tapes.  The 120m
tape has a thinner plastic substrate, so the winds of tape lie
very close to each other. And the recording density of 120m tapes is
significantly higher than that of 90m tapes, which aggravates the
problem.

Over time the magnetic images on DAT tapes become blurred because of
this, and 120m tapes are the worst.

I still have one HP-1533 DAT and some of my customers still have
some, but we sure don't count on reading data from the tapes after a
couple of months. And the 120m tapes have all been relegated to
short-term storage only (a matter of days).  I've switched to
recordable CD-ROMs for all data backups that are important to us and
must be reliably read more than a few months after writing.
My $0.02,
John@Seitz.com



On 4 Sep 1996 at 09:17, Milton Shomo <mshomo@kcf.ishub.com> wrote:

I've also had extremely good luck with the 3M tapes.  I have found
that Computer Discount Warehouse (800-363-4239) always has them
readily available.


Mike Avery wrote:

As a friend always says, "The wonderful thing about standards is how
many there are to choose from". As far as it goes, I've had very good
luck with the 3M tapes (when I can find them).  I'm delighted that a
90 meter tape will back up my own system for about a week. Good
luck,


PAC@MSK1 wrote:

I thought I was nuts until I read this post.  I cannot get any 120m
DAT tape other than HP tapes to work reliably in my HP DAT drives;
Sonys are unreliable and Maxells fail every single time.  The
Maxells are a little better in my Compaq Archive unit, but still are
not as good as the HPs there either.  The Sony 90m tapes seem to be
OK anywhere, however, but I have had trouble with HP's 90m tapes in
the WangDAT 3200 drives I have, along with Inmac and some other brand
I can't remember offhand.  Damned annoying!  The joys of the effects
of the marketplace on standards. Peter


Mike Avery wrote:

A DAT tape that will hold 4 gigs, 8 compressed, will be hard to find
and will run about $25 each.  The 60 and 90 meter 4mm DAT tapes will
hold 1 or 2 gigs uncompressed and usually run $7 to $10. The 120
meter 4mm tapes are all but unavailable, and they will run closer to
$20.


Some people will find Sony DAT 120 meter 4mm DAT tapes and think
their problems are over.  Actually, their problems are just
beginning.  It seems that HP and most other DAT drives have trouble
reading data from a Sony 120 meter 4mm DAT tape.  A backup is only
as good as your last restore.  As to the Sony slamming, I suspect
that if you use a Sony drive, they work fine.  The word I got was
that some of the low-level formatting is different on the Sony
tapes.   <snip>


Mike Avery wrote:

Interesting.

There was an article in Scientific American a few years back about
backups.  Magnetic longevity was one of the matters discussed.  Of
course, all tapes have some tendency to self-demagnetize.  That's
why the video store wants you to "be kind and rewind", and why
recording studios always stored tapes tails out - it made
print-through less likely and more random.


The Scientific American article said that they expected DAT tapes to
last about a year.  I was concerned since the employer I was with
then needed 10-year archives.  I wound up calling 3M and getting the
runaround for several days.  In the end I talked to an engineer. Her
strong suggestion was that how long a tape will last depends in
large part on how it is stored.  It should be stored standing on one
of its edges, not lying on the large flat part of the case.  Their
accelerated aging tests suggest that six months to a year is about
as long as they will last lying down, but that 10 years is
reasonable if they are standing on edge and are stored at a
reasonable temperature.

---------

Date: Tue, 1 Oct 1996 15:17:30 -0500
From: David Buckwalter <david@ACRC.UAMS.EDU>
Subject: DAT Tapes

>Recently there has been some discussion about backup tape units.
>What I'd like to know is the length of 4 mm DAT tape that one should
>be using.
>
>From what I have been told, one should use only 90 m tapes, as they
>are more reliable than 120 m tapes because the 120 m tape is
>"thinner" and can break more easily.
>
>So I have been using 90 m tapes, but on my 8 GB tape drive this means
>that I can only store 4 GB on a 90 m tape. I will be getting 6 GB of
>storage soon, and this would mean that by using 90 m tapes I will
>need 2 tapes and there would not be anyone to change the tape during
>the backup window.
>
>Does anyone have any comments on the 90 m and 120 m tape lengths?

One thing I have noticed about 120m tapes is that the drive has to be
cleaned more often. I switched to a 4/8 GB DDS-2 drive about a year and
a half ago and began getting numerous tape errors with new 120m tapes.
The old 90m tapes would work fine. So I clean the drive every week
before the full backup (whether it needs it or not) and have had no
problems. I get about 2.6 GB on a 90m tape and 5.6 GB on a 120m tape.

---------

Date: Tue, 1 Oct 1996 20:10:32 -0400
From: Dan Schwartz <dan@SNIP.NET>
Subject: Archiving & backups

	Just had to stick my 2¢ in here about backing up & archiving...
Over on the Mac side we've needed different storage technologies for many
years now, because of the huge graphics files (>100 MB) that we need to
take to service bureaus for imagesetter (high resolution) output to film.
In our case, technologies such as the SyQuest & Bernoulli removable hard
drives and Magneto-Optical (MO) disks are quite popular, with tape being
mostly unsupported (except for a few DATs here and there).

	In the last 2 years recordable CD's (CD/R) have exploded on the
scene, with good HP 2X units running under $600. These are excellent for
archiving data for long periods - Like 10 years as one person said his boss
needs. And, being able to record 650 MB of data for about $6.50 can't be
beat.

	CD/R has one additional benefit for disaster recovery: Very strict
standards. An ISO-9660 CD recorded on a Mac will be playable on any Mac or
PC - None of this crap about only being able to peel the data off with the
same deck that shoved it on the tape. This is *crucial* because if your
computer WITH ITS TAPE DRIVE gets damaged (lightning, fire, sprinkler,
theft, etc...) you just may NOT be able to easily restore your data on all
those tapes. The paramount advantage of CD/R is that no matter what happens
to the computer, you can always run down to the local mall  _at_any_hour_
and get SOME kind of PC with a CD player, restore the critical data and be
up & running Monday morning. (I'm kinda stretching it here a bit, but
you'll be surprised at the versatility of CD/R.)

	In addition, CD recording has another nice advantage: Inexpensive
transparent archiving of data for (almost) online retrieval. For example,
one of my clients is a magazine publisher, producing seven monthly
journals. By archiving the back issues onto CD the artists can still
retrieve any info - Old ads, scans, etc. - and use it in the current issue
being produced. With 2X SCSI CD players down to $29, just keep hanging them,
7 at a time, off another SCSI card and share them "wide open" (no access
restrictions) across the network. And it works... Very well.

	CD recording has other uses, some of which you wouldn't even dream
of until you actually have one online. Mine has saved my bacon more than
once. For example, say you need to send a large file to a client: Just pop
it on a CD - Even if it's only 10-20 MB or so. Another use: Mirroring your
server boot drive with all its NLMs & settings. If it becomes corrupt,
just reformat & copy it back from the CD. [Radius did this with their
PowerPC machines: They created a "perfect" machine and then put its image
on a recovery CD... And it works nicely.]

	These are some tricks you can do with CD/R that you can't do with
tape. The only caveats are that you are limited to 650 MB at a time, and
that once you write, that's it. You CAN, with multisession recorders, add
data... But you can't take it off.

	Magneto-optical is now getting up to the 4 gig per cart level, but
MO technology is quite susceptible to dust-induced corruption.

	IMHO, leave the tape for the beach and the car, and NOT for the
computer.

---------

Date: Tue, 1 Oct 1996 18:50:18 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: Archiving & backups

[Restatement of above message deleted]
--------------
	By way of contrast, recording a CD requires a machine able to
pump bytes at the time needed, not later, and that means an isolated
machine rather than a busy server. Otherwise one gets data underruns.
	Peeling labels off is the same problem as with tapes, except that
tapes are reusable (yes, that can be a problem).
	Making a CD is also limited to CD speeds, so a bunch of bytes is
going to take a corresponding bunch of time, during which some critical
files may not be available.
	CD-ROMs are generally not archival quality either. The clear
plastic covers are not sealed well enough to prevent oxidation and minor
separation.

	(Break, Break). Small addition to the general thread, which was
about DAT tapes. As I mentioned earlier this summer many folks are just
beginning to relearn tape handling methods known since the computer
paleolithic era. My advice was to degauss a tape and run it through a
mechanism at least once to clean and burnish it; the full tape erase in
backup programs does both at once. Cleaning tape heads is done far less
often today than when tapes really counted (or should we say when we used
Real Tapes, no pun intended). Tape print-through, as the situation is
known, can occur, and its avoidance technique is just wind and rewind the
tape once a year or so. Six months is hardly the life of a DAT tape, as a
great many sites can verify.
	For the very conservative system managers out there still using
punched paper tape, I hope you've not let those tapes dry out and become
fuzzy. Personally I still have a reel or two, but I did get rid of punched
cards and stacks of printed listings. The ancient Egyptian engineers knew
a thing or two, and I was reading their 5K-year old backups this summer.
They were GUI based too.
	Joe D.

---------

Date: Wed, 2 Oct 1996 09:54:08 -0400
From: Dan Schwartz <dan@SNIP.NET>
Subject: Archiving, part deux

[Responding to Chris & Joe D. above]

	Chris, you are quite correct about the buffer underflow issue. But
it isn't quite as bad now that CD/R manufacturers are putting decent-sized
RAM buffers in their recorders. The days of the 64 kilobyte buffers in
Pinnacle Micro CD/R's are long gone. Surprisingly, the biggest cause of
buffer underflow is the hard drive itself, when it pauses every couple of
minutes to go into a T-Cal (thermal recalibration) cycle for 50-100
milliseconds. This "hiccup" is not only devastating to the video boys but
can also turn a CD into a $7 drink coaster.

	The best type of hard drive to use for CD/R is one that doesn't
pause for T-Cal's: The Quantum Empire and Seagate Barracuda drives embed
servo information in each platter so the heads recalibrate their position
on each rotation. Micropolis drives allow you to postpone T-cal's up to 10
minutes; but you can easily knock the heads so far out that you need to
low-level format the drives (and lose all your data!) to get everything
back running... Just ask any digital video producer about that one! :)

	Fortunately, CD recording software (or at least the
(non-proprietary) Mac CD/R software I have used - Toast and Gear) has two
test modes to verify the speed of the host computer: The first is a 60
second spot check that measures the sustained transfer rate; and the second
actually is a full test run with the laser on low power (read level) if
it's going to be close. Another safety feature is that you can slow down
the recording with many recorders, down to a 2X or 1X rate on 4X recorders.
[Note: each multiple corresponds approximately to the data rate required to
play back a Red Book audio CD: About 150 kilobytes/second.]
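
	To put those multiples in perspective (simple arithmetic, not in
the original post): a full 650 MB disc at the 1X rate of about 150 KB/s
works out to 650 x 1024 / 150 = roughly 4,400 seconds, or about 74
minutes; call it 37 minutes at 2X and 18-19 minutes at 4X.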

	The choice now becomes setting up a separate inexpensive CD
recording station, such as a Quadra 610 or 486/20; or hitching up a CD/R to
the server. With a slow recording rate and an overnight run (when the
server is lightly loaded), the second method is possible; but with
hardware being so cheap today it may actually be easier (especially for
CD/R neophytes) to just set up a separate recording station.

	Having burned quite a few CD's with Toast, Gear, Pinnacle, and
Kodak software I would recommend, for first-timers, setting up a Macintosh
workstation. You'll get better diagnostics (not that you'll need them :)
and absolutely *no* set-up hassles, because the Macs have built-in SCSI,
and (Centris, Quadra and up) have built-in DMA-based ethernet. When I back
up a PC server (NetWare or NT) I just mount the server drive on the
Desktop, launch Toast, tell it to make an ISO-9660 image, and select (up
to) 650 MB of server files. I get a cup of coffee, come back, hit "Record,"
insert a blank CD, and come back in a half-hour or so to hand the client a
finished archive disk for their server. Client then wipes off 650 MB of
files from their server hard drive and puts CD in their player, changer, or
jukebox. For AppleShare servers I create HFS disks; for NetWare and NT
servers I create ISO-9660 disks -- Gotta keep the customers happy!

	[Another extra benefit of recording on a Macintosh is the ease of
producing audio disks... When the boss sees that you can record disks
invariably the first question will be if you can make a disk for his stereo
-- Gotta keep the boss happy, too! :)]

---------

Date: Wed, 2 Oct 1996 12:03:26 -0400
From: Dan Schwartz <dan@SNIP.NET>
Subject: CD/R; part trois

	Since I've already received two requests for more info, I'll point
everyone to an excellent, well-written White Paper from FWB about the
subject. I'm also pasting the Executive Summary below. If anyone can't get
it, I'll convert it to text and eMail it to you.


Source: FWB CD-R White Paper
URL: <http://www.fwb.com/whitepapers/wp-cdrwhitepaper.html>
Full White Paper available at:
<ftp://ftp.aimnet.com/pub/users/fwb/white_papers/cd-rwhitepaper.sea.hqx>


			      CD-R White Paper
			      Executive Summary

The desktop CD-Recordable drive market is relatively new. A combination of
technical refinements, material cost reductions and growth in the installed
base of CD-ROM readers has enabled desktop CD recorders to become viable
only within the last twelve months. This evolution of the technology is
bringing about a revolution within many areas of personal computer usage,
including electronic publishing, multimedia, imaging and archiving.

This revolution, as well as continued reductions in price, improvements in
performance, and enhanced ease-of-use, are going to lead to one of the
steepest growth curves ever experienced by a data storage product. We are at
the beginning of a huge opportunity for those who will manufacture, resell,
integrate or use CD-R technology.

This document is meant to serve as an introduction to the history, benefits,
and techniques of desktop CD recording. It provides guidance for those who
are interested in evaluating, purchasing and using CD-R products. Finally,
it supplies detailed information on FWB's current CD-R solutions. We hope
that this brief proves to be a useful tool for you, and encourages you to
get more involved in this exciting new technology.


	One last thing: I tried Retrospect 3.0 for backing up to CD; but I
don't care too much for the way it writes to a CD in its compressed
format... I'm just a bit gun-shy about having anything proprietary; and
Retrospect's format also disallows mounting the CD's in a jukebox to easily
share them across the network for near-line storage.

------------------------------

Date: Wed, 9 Oct 1996 09:22:07 +0000
From: Eric White <Acuity@HOOKED.NET>
Subject: some thoughts on ArcServe 6

I've just upgraded to ArcServe 6.0 for NetWare, and figured others might
benefit from my experience.

I am new to the ArcServe products, and have had no training or introduction.
I found ArcServe 5.01(g) to be rather troublesome and difficult to get and
keep running.  I am in a multi-server environment, with an ADIC 12-tape
changer, which seemed to complicate matters a great deal.  Cheyenne's tech
support was, however, very helpful and once I got the system running it did
run fairly smoothly.

I installed ArcServe 6.0 (Enterprise edition) on a fresh new 4.1 server and
was immediately impressed with the installation process.  Notable
improvements include the fact that install accurately identified both my
changer and the tape drive, and configured the SCSI information
automatically (yes!).  No need to manually configure anything, just give
it a group name and move on.  The install process is solid, thorough, and
walks you through the entire process.

Once I got the upgrade changer support (version 3.0), the changer option
installation was equally simplified.  *Warning* According to Cheyenne, DO
NOT use changer support version 2.x with ArcServe 6.0 unless you want your
server to abend regularly.  The upgrade is absolutely required.

Once installed it was time to start ArcServe on the server.  I love the NLM
loader screen that accompanies the Astart process.  No longer do you get
but a fleeting glimpse of error messages as they fly past the console
screen.  The entire load process is logged in the loader screen, which
allows you to scroll through it, and automatically unloads 5 minutes after
completion.

I used the ArcServe Autopilot to configure my backups, choosing a
differential/full routine.  The interface is much improved, giving you
increased options and additional views (I particularly like the calendar
view, which shows what backup will happen on what day for the entire
month).  You can override many of the autopilot options, or just create a
custom job of your own definition.

Reporting is also improved, giving more details than previous versions and
quick status checks with green, red, or yellow buttons associated with a
job.

All in all, a terrific upgrade.  Much easier to use, simple to configure,
and (so far) extremely reliable.  The entire installation went, start to
finish, without a single need to call tech support (even for a newbie to
the product!).  If you're considering this upgrade, I'd recommend not
hesitating.  I think you'll be impressed with the changes.

------------------------------

Date: Fri, 18 Oct 1996 21:25:17 +0100
From: Peter Stromblad <Ps@MSEK.LTH.SE>
Subject: Solved: unknown compression format

Thanks for your responses, problem partly solved.
Below is the incident log.
Cheers,
Peter S.

Date       Status                      Comments
10/18/96
	   Cust Left
	   Msg
			Installing the patches within 410pt6 solved
			the hang on the second access to a file
			that had the unknown compression format
			broadcast.
			Thus I wrote a small program to check the
			files, and now I know what files to
			replace. With the program I've checked
			other volumes as well and they have no
			faults what-so-ever. VOL1 was being backed
			up when we had the many server abends and
			this is the only reasonable source of
			malfunction regarding the files.
			Apparently some of the files have been
			decompressed faultily, though, and only a
			binary compare with correct ones will fix
			this, i.e. the NOS will report neither that
			the file is corrupt nor that it can't be
			decompressed.
			Cheers,
			Peter Strömblad
10/18/96
	   Solution
	   Suggested
			More info / suggestions:
			The problems mentioned previously occurred
			at the same time they upgraded to Arcserve
			6 from version 5 (or very near it). They
			feel strongly that their problems are
			related to the Arcserve upgrade and an
			incident that occurred shortly after in
			which another admin tried running arcserve
			with the astart.ncf file (the old one for
			Arcserve 5) instead of astart6.ncf. From
			that moment on they started seeing the
			compressed file problems. Suggestions: 1.
			Scan for viruses on the DOS partition.
			Viruses have been known to cause file
			compression probs. 2. Replace server.exe
			and all disk drivers on the DOS partition.
			A corrupt server.exe / disk drivers have
			also caused corruption / compression
			problems. 3. Apply 410PT6. Peter says they
			already have the other updates. 4. Send in
			config.txt. I will look it over and get
			back with Peter.
10/18/96
	  Cust Left
	  Msg
			After receiving the error message, the
			directory where the file resides is locked
			from further access. Files in other
			directories can be accessed. The only
			timely solution to access the server again
			is to reset the server and restart. After
			restart, immediate deletion or overwrite of
			files signalled to be faulty on previous
			attempts is successful.
10/18/96
	  Cust Left
	  Msg
			Exact fault on server console is
			date time: Server-4.10-3149
			Compressed file vol1:/apps/windows/dosx.exe
			being decompressed for station 3 uses
			unknown compression format. Compress screen
			displays at first access * DOSX.EXE
			1174499073 -519967743 1 10009
			Second access to file makes server useless,
			reset the server and the file can be
			deleted, compress status flag has then been
			removed. If there is no util that fixes
			this automatically I want to access the
			files one by one, when the server signals
			3149 I'd like to remove the file without a
			call to the uncompress algorithms and
			replace the file from a healthy server.
			Problem remains, I'll call back tomorrow.
10/18/96
	  Cust Left
	  Msg
			Received and tried remfile.nlm; it only
			supports removal of files within the
			secure directory, not files on other
			volumes. Problem remains.
10/17/96
	  Solution
	  Suggested
			Has compressed files that are corrupt; upon
			first access a broadcast is sent from the
			server, compressed file uses uncompressed
			file format, and the file is neither copied
			nor deleted (have tried DOS copy, delete,
			and ncopy). During second access of the
			file there will be no error msg, again the
			file is neither deleted nor copied, and
			MONITOR on the server will hang. Will then
			toggle to console; if DOWN is issued, all
			volumes dismount except the volume where the
			corrupt compressed file resides, and during
			the dismount the server will abend w/ gppe,
			rp: serverXX.

---------

Date: Fri, 18 Oct 1996 14:01:46 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: Solved: unknown compression format

>Thanks for your responses, problem partly solved.
>Below is the incident log.
------
	<detailed reports omitted>
	Whew! What a mess.
	So there is checking of header information of compressed files, and
if the results fail then the file is declared unusable. Sure looks like
Drew's way of doing business, thank goodness.
	A lesson here is that there are many things which can and do happen
to data being moved from a disk drive to the net/tape drive, and checksums
are needed to reduce corruption (it can't be totally eliminated; I can tell
folks why if there is demand). From the omitted details we have available
weak disk handling software plus the intrusions of Arcserve running at the
same time, plus whatever the machine's hardware wants to do as "help." Alas,
checksums en route are expensive in time, and we all know that "Performance
is Everything." The same can be said about computer memory SIMMs, those with
no parity, those with only one bit of parity, and the good stuff with ECC
detection and correction.
	And another lesson, for the tape backup vendors in the audience.
Tape backup software MUST be able to verify a backup by comparing tape
data with original data, not just scan the tape for errors later on.
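	(An aside, not from Joe's post: a crude manual version of that check
after a test restore is a plain DOS binary compare run from a workstation
that can see both copies, e.g. fc /b f:\data\ledger.dbf g:\restore\ledger.dbf
with made-up example paths; FC /B compares byte for byte and reports any
differing offsets.)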
	Personally I never use disk file compression on NW servers. It's a
disaster waiting to happen, and disks are cheap this year. But of the various
ways disk compression has been attempted, Drew Major's NW rendition seems to
be the slickest.
	Finally, a more subtle lesson is critical data should be backed up
by more than one method. Trust not a single program! When rebuilding servers
I will often do a DOS copy/xcopy/pkzip of vital items to other machines,
in addition to running tapes through recorders by at least two different
programs.
	Joe D.
P.S. Who is Drew? Novell's NetWare "Chief Architect."

------------------------------

Date: Mon, 21 Oct 1996 20:05:36 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: SMS data migration over the net ?

>>I was wondering if there is a way of NW  4.10  data migration to a tape
>>or optical media over the net (like to a networked server drive).
>>
>>What is the servers overhead in having data migration to handle large
>>storage on slow media (like 10 - 30 GB)?
---------
	So far as I know, being a non-user of the facility, NetWare file
migration occurs to a device or devices attached directly to that particular
file server. There is no facility to move files across the net to another
server (think of the security, directory structures, overall bookkeeping
headaches). That's today.
	Think ahead two or three years. By then distributed file systems will
become available to our lans, hopefully. File migration then has another way
of acting, if customers and designers think it is important enough to invest
engineering time to implement. [Maybe DSREPAIR will be renamed SITEREPR.EXE
instead, given the way NDS couples things. Smiley goes here.]
	Server overhead is handling a particular device, so you'd have to
look at that combination first hand to know what overhead that device entails.
Probably little, more like dealing with a CD-ROM at worst. Because the facility
is present now you have a chance to run tests and get meaningful answers at
your site with your equipment (plus borrowed high capacity storage system/HCSS
device).
	Joe D.

------------------------------

Date: Thu, 24 Oct 1996 10:35:48 -0400
From: Steven Sathue <SATHUEST@TOKIOM.COM>
Subject: BackupExec -Reply

>I am looking for suggestions for increasing the speed of Backups. I am
>using BackupEXec for Netware Windows Workstation Edition. The
>computer that it is running on is 486dx 33 with 8meg ram and a 40meg
>hard drive.

I'm sure you'll receive more "powerful" suggestions from other members
of the list (e.g. get a faster PC with more memory, etc.), but here's
something that might give you some minor performance increase that's
cheap...

I issue the following in my backup server's AUTOEXEC.NCF:

;; OPTIMIZE ENVIRONMENT FOR BACKUP OPERATIONS
set dirty directory cache delay time = 10 seconds
set dirty disk cache delay time = 10 seconds
set immediate purge of deleted files = on
set turbo fat re-use wait time = 1 hour
set directory cache allocation wait time = 2 minutes
set directory cache buffer nonreferenced delay = 3 minutes 45 seconds
set maximum directory cache buffers = 2000
set maximum concurrent disk cache writes = 20
set maximum concurrent directory cache writes = 5

I based these SET's on the recommendations made by the program
"NetTune", by Hawknet, which I evaluated about a year ago.

I have yet to do any "time trials", but casual inspection of my ARCserve
Activity logs when I first "installed" the above showed some modest
improvement in backup times (on the order of 10 minutes to around 1 hour,
I think, depending on job type, size, etc.).

My server is dedicated for backups. If your server performs other
functions, the above SET's may adversely affect its performance.
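
One way to limit that exposure (a sketch, not from the original post): note
the current values before applying the block above -- typing SET at the
console lets you browse each category and its current value -- and keep a
second NCF that puts them back after the backup window, for example:

;; RESTORE-TUNING.NCF -- example name and example values only; use the
;; values your own server showed before the backup settings were applied
set immediate purge of deleted files = off
set maximum concurrent disk cache writes = 50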

------------------------------

Date: Fri, 25 Oct 1996 10:26:00 -0400
From: "QUIBELL: MARC" <txh8205@TEXCOM-HOOD.ARMY.MIL>
Subject: Re: BackupExec

<lengthy discussion on PC backup vs. server backup>

Putting an elaborate tape array system using a striping algorithm on a file
server is a million times faster than using one tape drive on a remote PC.
Server-based backups are also faster than PC-based, due to the
distance-data-travelled equation. We have an 8-drive tape array that can
back up more than 13GB of data in an hour. At the very least, attach the
backup system to the server...

------------------------------

Date: Tue, 29 Oct 96 13:50:34 -0800
From: Randy Grein <rgrein@halcyon.com>
To: "NetWare 4 list" <netw4-l@bgu.edu>
Subject: Re: Backup - Workstation or Server?

>A salesman just told me that a tape drive must be mounted on the
>server in order to back up NDS. Is this true? I have always used
>server-based tape drives, but I am curious to know if a
>workstation-based solution would work.

Not true, but only technically. There is an unsupported public domain
application (both NLM and exe versions) that will back up NDS. Palindrome
developed it for 4.0 some time ago. It is NOT, however, considered a
production application, and NDS is only one of the many reasons not to
rely on workstation-based backup. Unless you count NT Workstation, there
are no workstation-based backup systems that are recommended for 4.x
servers.

------------------------------

Date: Wed, 30 Oct 1996 15:13:18 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: Documentation for prioritizing 4.1 threads?

>My systems engineer and I have found that we need to tell Netware 4.1 that
>backing up is a higher priority than TTS or automatic compression. We were
>experiencing corrupted binderies, and pin-pointed the problem to our Arcada
>backup trying to backup while Netware is trying to compress.
>
>Our solution is to prioritize the threads running, but we cannot find
>documentation telling us the names of all the threads. Can anyone point me in
>the correct direction?
-----------
	I recall that it's by task number, not by name. That means finding
the number manually every time the server is booted. Much better would be
an NLM doing floating priorities, like "real operating systems" but tailored
for the file serving environment.
	The easiest way out of your situation is to schedule backups and
compression in non-overlapping intervals. Alternatively, do away with
compression. TTS must run or NDS stops cold.
	Joe D.

------------------------------

Date: Thu, 31 Oct 1996 08:12:13 -0600
From: "Mike Avery" <mavery@pernet.net>
To: netw4-l@bgu.edu
Subject: Re: ARCserve 6.0

>>Wait until you need to rebuild your server when it crashes.  You may wish
>>that you had used another system because you will need to load the NOS
>>and the backup software before you can even get started.

That's not actually true.

With both Stac's Replica and Columbia Software's SnapBack recovery is
quite simple.... here's the summary:

1.  Repair or replace any damaged hardware,

2.  Boot with a recovery diskette,

3.  Load from backup tapes, and

4.  Reboot the server.

Both products, and I think several others, do a physical-level
backup, so they restore all the partitions and the boot block.  Once
the restore is done, you are good to go.  At this time, these products
have to have a tape drive directly connected to the server they are
backing up.

Also, some of the other biggies offer single disk recoveries without
tying you to a physical backup.  Symantec used to have one before
they got out of the network backup business (a pity, imho), and I
believe Legato offers the same functionality.

------------------------------

Date: Wed, 6 Nov 1996 12:50:43 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: [1] [2] [3]

<snip>
>3) Why is everybody so anxious about not using SBACKUP? My backup media
>is a SCSI tape drive which works well with TAPEDAI, and SBACKUP seems to
>be <knock wood! knock wood!> working relatively well.

	Keep knocking, learn that it is undependable and version specific.
	Joe D.

------------------------------

Date: Sat, 9 Nov 1996 17:16:46 -0500
From: Dan Schwartz <dan@SNIP.NET>
Subject: Contributions needed: No $$$!

I'm assembling a short Web-based tutorial on backing up and archiving...and
this will apply to all desktop platforms. The page is at:

	http://www.snip.net/users/dan/backup.html

I don't pretend to know *too* much.  If anyone has any additions
please eMail them to me! <mailto:Dan@snip.net>.

------------------------------

Date: Mon, 2 Dec 1996 09:26:56 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: NDS Backup (2)

>>You actually have to do both in combination if you use DSMAINT, It only
>>backs up the NDS objects, no file rights/ownership.
>
>>DSMAINT Works pretty good. Done it here three times now to upgrade 3
>>different servers hardware.
>>
>>Before you run DSMAINT read the directions that come with it. When you run
>>it no one will be able to log into the server, for it closes the directory
>>services on that server until you run DSMAINT again to restore the NDS.
>>here are steps I would do (if SYS: is your only volume on the drive).
>>
>>1. Use ARCSERVE to backup the volume. (I suggest make two backups - on
>>   one of my restores I had a bad backup - that wasn't fun :-))
>>2. Run DSMAINT
>>3. Copy NDS file DSMAINT created to floppy, workstation or another server
>>4. Replace drive
>>5. Install Netware with NDS in its own tree
>>6. login into that tree copy NDS file and DSMAINT to new volume system
>>   directory (stay logged in so you can do a restore)
>>7. Remove the 'temp' NDS with install
>>8. Run DSMAINT to restore NDS
>>9. Restore from ARCSERVE
>>
>>Paul Pavlicko
>
>I agree with Paul.  I replaced the HD in our server using DSMAINT and it
>went very smoothly, but:
>
>It's un-intuitive to create a temporary NDS as Paul mentions in
>step 5 above, but IT'S NECESSARY.  It gets deleted in step 7, so
>you can call it whatever you like.
>
>Scott Wiersum
	I had to rebuild a server over the holiday when its NDS files became
corrupted. DSREPAIR could not fix the files, DS.NLM refused to even look at
them, so the files were locked and no NDS was active on the server. DSMAINT
would not help since the point of no return had been passed.
	I did a fresh reinstallation of the server, but into a new temporary tree.
Afterward Install was used to remove NDS and then put the server back into the
regular tree under its old name. Tapes restored user data. The reason for using
a temp tree in this case was to let all the software components be installed
on the server *before* becoming involved with NDS synchronization exchanges.
	Joe D.

------------------------------

Date: Wed, 4 Dec 1996 13:18:19 EST
From: Sam Martin <Sam_Martin@G1.COM>
Subject: Re: Replacing SYS: on 3.12

>I know there have been a lot of humorous references to replacing SYS
>lately, but I am now in the uncomfortable position of having to do just
>that...Here is my plan of action, PLEASE let me know if I've missed
>anything, or if there's anything in particular I need to look out for.
>
>1. run bindfix 3 times (to get current old files)

xcopy sys:system\net*.old a:  when you're done.

>2. purge the volume

H'mmm ... good idea.

>3. back up sys (twice, using arcserve 6)

each backup to separate media.

>4. delete the volume

about the same sensation experienced when signing the dotted line on
your home mortgage.

>5. repartition the drive (this is why I had to do it in the first place)

Seems to me you have to take the server down at this point, don't you?
See comments on Step 8

>6. recreate the volume

add name space at this point (if any)

>7. restore

restore the bindery first, then the rest of the volume

>8. hope

...that you have server.exe, LAN and disk drivers, install.nlm,
vrepair.nlm, any name space you're supporting, bindfix, bindrest, and
net*.old from sys:system, plus startup.ncf and autoexec.ncf, on floppy,
along with a DOS boot disk w/fdisk, format and sys.com, edit.com and
qbasic.exe (so edit can run from floppy), and your favorite disk editor.
Just for drill, keep a copy of remote and rspx.nlm handy too. This list
(w/add-ons specific to my hardware) serves me as an emergency toolkit. If
the disk contains the boot DOS partition, you'll need the DOS boot
diskette for sure. Maybe use jcmd or nwshell to copy the DOS partition to
NetWare, and back that up as well.

My experience tells me that this kind of reconfiguration is not a big deal,
as long as you have solid backups.

---------

Date: Fri, 6 Dec 1996 09:02:27 +1000
From: Mark Cramer <m.cramer@QUT.EDU.AU>
Subject: Re: Replacing SYS: on 3.12

>>I know there have been a lot of humorous references to replacing SYS
>>lately, but I am now in the uncomfortable position of having to do just
>>that... Here is my plan of action, PLEASE let me know if I've missed
>>anything, or if there's anything in particular I need to look out for.
>>
>>1. run bindfix 3 times (to get current old files)
>>2. purge the volume
>>3. back up sys (twice, using arcserve 6)
>>4. delete the volume
>>5. repartition the drive (this is why I had to do it in the first place)
>>6. recreate the volume
>>7. restore
>
>I think you must first change the order of your actions:
>
>1. purge
>2. vrepair
>3. run bindfix until no errors
>4. save bindery-files (old) to local hard-disk
>5. backup
>6. delete the volume
>7. repartition
>8. recreate
>9. restore
>10. bindrest

If you do it this way you should pray: you won't have either trustee
assignments or queue directories, and you'll lose all file ownerships.

FLIP STEPS 9 AND 10, and you should get trustee assignments and file
ownerships back. Copying queue directories takes extra steps. BTW, it's a
lot easier to do this kind of thing over the wire to another server or
even to another volume on the same server.

------------------------------

Date: Thu, 5 Dec 1996 10:23:36 -0600
From: "Lindsay R. Johnson" <lrj5@IAONLINE.COM>
Subject: Re: Tape backups on SFT III servers

>I'm about to purchase (in the next few weeks) a pair of servers running
>Netware 4.11 and SFT III.  I intend to back up to a DAT Drive with a
>4-tape hopper using either Backup Exec or Arcserve.
>
>I've checked the FAQ which says that Arcserve 6 doesn't work and my
>reseller says that Backup Exec is very slow.
>
>Is this correct ? Can I do tape backups of my SFT III servers?

My research discovered that no server-installed tape backup solution is
supported on 4.x SFTIII.  Period.  Do it over the wire.  My ongoing
discussions with Cheyenne's product manager communicate a dedication to
having a supported product.  It will likely be available for the 4.11
platform first.  Initial beta-testing should begin soon.

This status killed our SFTIII deployment as high-throughput tape backup is
required...mission-critical apps tend to have a lot of data...  I feel this
is a huge failing.  It's implied that Novell's changes to 4.11 will allow
a more stable implementation of a server-based backup.  Time will tell.

If you can stand doing over-the-wire, AS6's push agent is supported on
4.10 SFTIII.

------------------------------

Date: Sat, 7 Dec 1996 08:19:10 -0500
From: Dan Schwartz <dan@SNIP.NET>
Subject: Disk allocation

>Just wondering - what kind of network disk space do you all allocate to
>a user for their home directory.
>
>I've been having some heated discussions with users who constantly want
>more space.  They know that I back up the network every night, and they
>don't want to have to mess with backing up data on their on hard drive,
>so they just stick it out on the network.

	This harkens back to mainframe days.

	The best answer is all the user needs -- It shouldn't be up to each
individual user to back up his/her own HDD. By having a central server all
the backups are done at once, on a regular schedule, in a central location
by professionals.

	When you have users doing their own backups and archiving it:
	A) Cuts down on their productivity;
	B) Causes a nightmare with everyone using something different (Zip,
QIC, SyQuest, CD/R, etc...) as their backup media;
	C) Can cause data loss as individuals are anywhere from nonchalant
to anal about their backups;
	D) Is not as efficient as having one person backing up for hundreds
of users in one shot.

	My attitude is to idiot-proof installations: This is the
philosophy behind the management software in DEC's Multias, and now NT 4.0
and `95 -- Essentially "locking down" the configuration. Backing up should
be the same way: Part of a service offered by the people running the server.

>Initially, whenever I create a new user, I give them 10 (ten) megabytes
>of space.  Is this average, way too low, just right?
>
>Do you handle different people differently, depending on their rank and
>title?
>
>Do you have written documentation explaining disk storage procedures?
>
>Does anyone have a policy for automatically deleting all files older
>than 6 months, 1 year, etc.

	Generally, in the magazine & periodical prepress industry files are
archived to DAT about 3 months after going to press; or about a month if
archived to CD/R due to its faster access. This is done so the layout crew
can pull up old ads and rerun them.

	Every industry is different when it comes to archiving: The
questions to look at are

	1) How often will the archived data need to be accessed?
	2) What is the access time of retrieved documents?
	3) Is the archive media rewritable?
	4) With hard disk drive prices rapidly approaching 10¢/MB
($100/gig) does it pay to just keep adding cheap hard drives rather than
archive?

>Just wondering how other network managers handle these situations.

	Personally, I recommend CD/R for most all archiving; with seven $39
SCSI 2x CD players in a tower. This gives up to 4.5 gigs of archiving for
under $500 (including tower case & cheap SCSI card). This yields a cost of
about $110/gig; with the bonus that the data is permanent on disk and will
not get corrupted, stretch or break.
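
	The arithmetic behind those figures, spelled out: seven players x
650 MB per disc is about 4.5 GB online at once, and roughly $500 of
hardware / 4.5 GB is about $110 per gig; the media itself, at the ~$6.50
per 650 MB disc mentioned earlier, adds about $10 per gig, written once.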

------------------------------

Date: Sun, 8 Dec 1996 16:26:21 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: BkExec fine with 90M tapes, abends with 120M

>Has anyone seen this before?  On one of our servers, we recently
>switched over to 120M DDS-2 from 90M tapes and found that the server
>would abend every time it tried to back up.  There were no problems at
>all when using 90M tapes.
>
>This is a NW3.12 server using AHA2740, Backup Exec 5.01, HP SureStore
>6000.
--------
	Peculiar indeed. I use an Adaptec 2740 (2742AT to be precise)
controller on a NW 3.12 server, an HP SureStore 6000 DAT tape drive,
but Backup Exec v7.01a. That combination accepts 120M tapes gracefully.
	I don't know which vendor made the tapes in your case. Mine are
HP brand (rumored to be made by Maxell). A standard tactic here is to
fully erase virgin tapes before using them for backups. That's an option
on the BackupExec console menu.
	And, the server is stressed harder with 120M tapes because the
DAT system switches to DDS-2 mode for higher transfer rates. It could be
your SCSI bus, disk drives, or even motherboard are not happy with the
higher rate. The usual comments about SCSI cable quality and termination
apply here. Recall that HP DAT drives have no internal termination so use
the external active terminator shipped with the drive.
	Finally, BE v5 has a fast file finder mode which eats server cpu
resources like crazy. I turned it off when I used v5; it defaults to off
in v7.
	Joe D.

------------------------------

Date: Mon, 9 Dec 1996 21:38:12 +0000
From: John Wells <jwells@PACIFICCOAST.NET>
Subject: Re: Tape backups on SFTIII servers

>>I've checked the FAQ which says that Arcserve 6 doesn't work and my
>reseller says that Backup Exec is very slow.
>
>Don't believe your reseller. We have Backup Exec v7.01b running on a
>v3.12 Pentium 75, with an Adaptec 2940 PCI SCSI card, connected to a
>Quantum DLT4000 tape drive. Very speedy indeed! (even for remote
>servers across the wire) We backup one SFTIII pair at 20
>megabytes/minute (1.8GB backed up in 1.5 hours)

My experience says that 3.12 and 4.1x are very different animals when it
comes to Backup Exec. I was getting 20 to 23 MB/min for over the wire
backups running BE 5.01e under 3.12 (on both backup and remote servers).
After upgrading one remote server to 4.1, the remote backup speed for that
server dropped to 3 MB/min. Upgrading BE to 7.11a brought this up to about
10 MB/min, where it remains. This is usable but not what I'd call speedy.
The comments from this list as well as from Arcada/Seagate are that this
is likely the best I can expect.

(Can anyone explain why the overhead (SPX?) is so much greater between a
4.1 and a 3.12 server compared to between two 3.12 servers?)

------------------------------

Date: Thu, 02 Jan 1997 22:00:37 -0600
From: Darwin Collins <dcollins@fastlane.net>
To: netw4-l@bgu.edu
Subject: Re: NDS backup without tape device ??

>Does anyone know if there is a way to backup the NDS properties of
>NetWare 4.x, that is comparable to the bindfix program from NetWare 3.x?

Peter Kuo has a utility that may help:

	http://ourworld.compuserve.com/homepages/dreamlan/ndsdir.htm

---------

Date: Fri, 03 Jan 1997 08:29:39 -0500
From: Sherri Colon <SHERRI@cobraelec.com>
To: netw4-l@bgu.edu
Subject: NDS backup without tape device ?? -Reply

There is a backup program used in the Netware 3 to 4 upgrade class called
EMMIF. It allows you to back up NDS to a file by making the file look
like a tape drive.

---------

Date: Fri, 3 Jan 1997 08:03:32 -0800
From: "Jay A. McSweeney" <mcsweej@pqwy.afsv.af.mil>
To: <netw4-l@bgu.edu>
Subject: Re: NDS backup without tape device ?? -Reply

I seem to recall that EMMIF worked well, but that it's a really unstable
NLM.  I don't believe I can recommend that for a production environment.

---------

Date: Fri, 3 Jan 1997 10:07:51 -0500
From: RBall84213@aol.com
To: netw4-l@bgu.edu
Subject: Re: NDS backup without tape device ??

>Does anyone know if there is a way to backup the NDS
>properties of NetWare 4.x, that is comparable to the bindfix
>program from NetWare 3.x?

Download the JCMD utility (jcmd.zip?) from ftp://netlab1.usu.edu/.
This utility creates a readable copy of subdirectory sys:_netware
that you then can copy to diskette.

	ftp://netlab2.usu.edu/sys/anonftp/apps/jcmd_135.zip

---------

Date: Sun, 05 Jan 1997 11:51:34 -0600
From: Darwin Collins <dcollins@fastlane.net>
To: netw4-l@bgu.edu
Subject: Re: NDS backup without tape device ??

>>>>Does anyone know if there is a way to backup the NDS
>>>>properties of NetWare 4.x, that is comparable to the bindfix
>>>>program from NetWare 3.x???
>
>>>Download the JCMD utility (jcmd.zip?) from ftp://netlab1.usu.edu/.
>>>This utility creates a readable copy of subdirectory sys:_netware
>>>that you then can copy to diskette.
>
>>I couldn't find it.   Please provide some more hints.
>
>It's at ftp://netlab2.usu.edu/sys/anonftp/apps/jcmd_135.zip

Thanks for the info.   This utility (method) would not be able to
back up/restore trustee definitions, but it could be very useful.

IMHO, it's really too bad that SBACKUP does not have the ability to
'write' to a disk file.  I used NBACKUP a lot with 3.x for 'quicky'
stuff.

------------------------------

Date: Wed, 8 Jan 1997 11:38:02 +1000
From: Greg J Priestley <Greg_J_Priestley%PKF@PKF.COM.AU>
Subject: Improving restoration times - tip.

Today one of the servers here (a Compaq Proliant 5000 with RAID 5, 4 x 2 GB
(6 GB usable)) had one of its volumes corrupted through a crash.  VREPAIR
diligently cleaned up the drive but identified a large number of corrupt
files (basically 2000 VRepaired files in the ROOT - a dozen or so files
reporting to be 4 GB in size!).

Because of the extent of the damage we decided to restore from the full
backup from last night (Arcserve 6 to a Compaq DLT drive).

Whilst doing the restore, we were averaging just over 50 MB/minute - a
little slow when 5 GB is involved.  Looking at the server console, the
processor was stuck at 100% with the dirty cache blocks approaching the
maximum.

I've always known about SET MAXIMUM CONCURRENT DISK CACHE WRITES but never
have received too much benefit because of the small data sizes I've been
playing with.

I set this to 200 (from the default of 50) and the throughput immediately
jumped, averaging over 75 MB/minute by the end (and, being an average over
the whole job, the later rate was realistically more like 80-90 MB/min).
Increasing this value saw the backlog of dirty cache writes reduced to a
manageable level and the server utilisation drop to around 15%.

Some difference!  Next time you have to restore large volumes of data, I
suggest you increase this value (at least temporarily) during the restore
and save some time and keep some users happy.
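
In concrete terms (not from the original post), that is one console SET
before the restore and another afterwards -- 50 being the default Greg
mentions above:

set maximum concurrent disk cache writes = 200
   (run the restore)
set maximum concurrent disk cache writes = 50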

------------------------------

Date: Sat, 11 Jan 1997 20:54:32 -0600
From: Richard French <RFRENCH@wpsmtp.siumed.edu>
To: netw4-l@bgu.edu
Subject: Re: Server based Tape Backup -Reply

You may want to look at Open File Manager as well for your backup.
It allows you to do backups while files are open and even hold
synchronization for those groups of files that may need to be backed up at
exactly the same time.  I have had zero problems with its implementation
and the cost was very reasonable.

------------------------------

Date: Mon, 13 Jan 1997 17:32:06 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: Migrate

>I am trying to copy  data from one volume on a 3.12 server to another 3.12
>while maintaining file ownership and rights. Ncopy and copy can physically
>move the data however all the data is credited to my account. <snip>
----------
	Ownership and trustee rights are held in the file system, and they
are stored as numerical idents. The bindery has conversion from numerical
to text idents. Ncopy/xcopy/copy/etc do not move this extra information.
Tape backup programs can. But when the information ends up on the other
side the other side's name-number tables are completely different than on
the source machine. Uh oh. The numbers are server-specific. Numbers are
used because they are compact and readily compared; text strings would
consume huge amounts of file system space and be slow to compare.
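	A made-up illustration of the mismatch (the object IDs here are
invented): the source server's directory entry might record owner ID
0300DE01, which that server's bindery maps to JSMITH; on the destination
server JSMITH is some other number, say 11000A57, so a raw copy of
"owner = 0300DE01" points at nothing, or at the wrong user.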
	This name versus number being strictly local to the server is true
for NDS as well, and is one of the reasons why tape restores of NDS material
are a problem on NetWare before 4.11. NW 4.11 has the good sense to record both
and use what's appropriate.
	So, use a copy program if you wish, and follow it up with a FILER
session to reassign ownership and rights by name on the destination system.
There are utilities from John Baird and Wolfgang Schreiber which can read
ownership and trustee rights to text files, suitable for replaying to reassign
them. Please look in directory apps on netlab2.usu.edu.
	Joe D.

------------------------------

Date: Sat, 18 Jan 1997 19:50:56 -0600
From: "Mike Avery" <mavery@mail.otherwhen.com>
To: netw4-l@bgu.edu
Subject: Re: Need Backup Device Recommendation

>I am bringing about 100 Gigabyte online soon with new Netware 4.10
>servers and need a new backup solution.  I am under some time
>pressure (surprise) and am looking for your recommendations for a
>backup hardware and software solution.  I am inclined toward using
>one or more DLT tape drives with auto changer, controlled through a
>Windows based application.  We don't currently use any MAC or OS/2
>name spaces but of course want to backup the NDS database. Can you
>tell me of a system you are currently getting good results with for
>a task of this magnitude?

100 gigabytes is, of course, a lot to back up.

A changer MAY be your best bet, but it may not be either.

If you are going to have one server with 100 gigabytes on it, then you
will be pretty much limited to a single backup session, and a changer
is as good as it gets.

If, on the other hand, you have a number of servers, you may be
better off with several tape drives.  100 gigs could be stored on five
20-gig devices.  The advantage there is that you can be doing backups
to five tapes at once and gain some advantage from parallelism.

With that much data to backup, it would also pay to look at your
backbone.  With a single server, the backbone is probably your EISA
or PCI bus, and the speed of the backbone should not be a problem.
(Please don't tell me the new server is an ISA bus, I just don't
wanna hear it, OK? <g>)

If you have multiple servers, then the backbone becomes your LAN
connection.  With 100% efficiency on 10mbps ethernet, your backup time
is around 30 hours.  You won't, of course, see 100%
efficiency for a lot of reasons.  Moving to a 100mbps topology to
connect the servers would greatly help get the backup done in a more
reasonable time frame.
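
Rough arithmetic behind that estimate (a back-of-the-envelope sketch, not
from the original post): 100 GB x 1024 MB/GB x 8 bits/byte is about 820,000
megabits; at 10 Mb/s that is about 82,000 seconds, or roughly 23 hours of
pure wire time, and protocol overhead plus real-world efficiency push it
into the 30-hours-and-up range quoted above.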

Some backup packages interleave backup sessions to improve
throughput.  Legato does this on their NetWorker product.  This
improves performance, but decreases the benefit of multiple drives.
Multiple drives are great under ArcServe.

Legato has a 30 day evaluation package on their web site
(http://www.legato.com/), and I like Legato a lot - although I have
not used it in that large an environment.  Setting it up is about as
much fun as a trip to the dentist, but once it's set up, it just
does its job.  I've never had an abend I could trace to Legato's
NetWorker, and I've never heard of one with the shops I support or
the installations a friend has made.

You may want to ask the other guys about evaluation packages and try
them out and see what you think.

I suggest you look at the math concerning your layout and backup
speeds and then make some decisions.  BTW - I suggest you not use
vendors' rated transfer speeds as a guideline.  Too much depends on
the rest of the network, the performance of the servers, and so on.

------------------------------

Date: Tue, 21 Jan 1997 11:01:49 -0500
From: Richard Boys <rmb@umich.edu>
To: netw4-l@bgu.edu
Subject: Re: Disaster Recovery Planning - Reply -Reply

Cheyenne just released a beta version for disaster recovery to go with their
Arcserve 6 product.  Check their web site, and here is the PR.

>>>
The purpose of this mail is to notify users that the Disaster Recovery
Option for ARCserve 6 for NetWare is now in Open Beta.  The Disaster
Recovery Option plugs into ARCserve to greatly reduce the time and effort
required to recover your servers after a disaster.  With a few diskettes
and your last full backup, ARCserve and the DR Option will completely
rebuild the server, including:
- Repartitioning the drive
- Restoring non-NetWare partitions
- Restoring data and all applications
- Putting you back on the network

The Open Beta can be downloaded from Cheyenne's website at
http://www.cheyenne.com/Register/dropenbeta/dropen.html or simply follow
the links from our homepage.

Please note: the Disaster Recovery Option requires the Service Pak.  If you
have not yet downloaded the Service Pak please do so and apply it to your
ARCserve installations prior to using the Disaster Recovery Option.
<<<

------------------------------

Date: Sun, 2 Feb 1997 14:38:34 +1300
From: "Baird, John" <BAIRD2@WHIO.LINCOLN.AC.NZ>
Subject: Re: Bindery backup

>Run BINDFIX twice, then copy the three (3) .OLD files from the system dir.
>To restore (if needed) copy the .old files into the system dir and run
>BINDREST.

True, this will provide copies of the bindery for backup, but unless
you have no more than a few dozen users on the server, running bindfix
is not something you can do without disrupting user activity. Bindfix
closes the bindery while it is rebuilding it and this prevents any
activity requiring bindery access. The period of closure can be up to
30 mins+ if you have more than 1,000 objects in the bindery.

An alternative is to use bindclos and bindopen from JRButils to allow
copying of the bindery files using ncopy (which is preferable to copy
or xcopy as it will do the copy internally to the server providing you
copy the files to somewhere on the same server).

A question that needs to be asked - why does the person who posted the
original question not have a Netware aware backup system which is
backing up the bindery files?

---------

Date: Sun, 2 Feb 1997 18:52:00 +0100
From: "Arthur B." <arthur-b@ZEELANDNET.NL>
Subject: Re: Bindery backup

>>Run BINDFIX twice, then copy the three (3) .OLD files from the system dir.
>>To restore (if needed) copy the .old files into the system dir and run
>>BINDREST.
>
>True, this will provide copies of the bindery for backup, but unless
>you have no more than a few dozen users on the server, running bindfix
>is not something you can do without disrupting user activity. Bindfix
>closes the bindery while it is rebuilding it and this prevents any
>activity requiring bindery access. The period of closure can be up to
>30 mins+ if you have more than 1,000 objects in the bindery.
>
>An alternative is to use bindclos and bindopen from JRButils to allow
>copying of the bindery files using ncopy (which is preferable to copy
>or xcopy as it will do the copy internally to the server providing you
>copy the files to somewhere on the same server).
>
>A question that needs to be asked - why does the person who posted the
>original question not have a Netware aware backup system which is
>backing up the bindery files?
>
>John R. Baird

True, but I for one must be able to 'do the job' with standard utils,
'cause I'll never know beforehand what utils are on the server
that needs to be fixed. Most customers aren't too happy when
you suggest installing some additional software they have
never heard of, however good it may be.

That said, given the choice between running BINDFIX or
backing up the bindery with something like ArcServe, I
would suggest both, simply for the event that SYS needs
to be restored from scratch. What you want then is a
restored bindery *before* you start the restore of the data,
and that can only be achieved by using BINDFIX/BINDREST.
If that troubles the users... so be it.

Else you can always work outside office hours.

------------------------------

Date: Wed, 5 Feb 1997 14:07:12 -0600
From: "Alan L. Welsh" <snapback@IX.NETCOM.COM>
Subject: Archiving vs backup issues

>Lately, our shop has had to start thinking about archiving
>issues. Requirements range from storing 30MB of dBase files for
>5 years, to storing imaging data (10-15GB per year) almost
>indefinitely.
>
>Possible archiving solutions
>
> - 4mm DAT (our current backup solution)
>   The tapes will last for 5-10 years if properly stored, but
>   will I still be able to find a functional drive? Will I be
>   able to find appropriate s/w to restore the files?
>
> - DLT (using this now to back up our imaging data)
>   Tapes are very durable, but I have the same h/w worries as
>   for DAT, to a lesser degree. My s/w concerns remain.
>
> - CD-R, i.e. writeable CDROM
>   (My current top choice.)
>   PROS- The media is supposedly good for ~100 years, and the
>   file format is well standardized; i.e. no backup s/w
>   compatibility issues. Also, CDs are very entrenched in the
>   marketplace, so I expect that compatible hardware will be
>   around for a while.
>   CONS- Low capacity. 15GB of 40K TIFs would use a lot of CDs,
>   which takes a lot of time and effort to burn. I don't want
>   to become a disc jockey!
>
> - DVD
>   Sounds very promising, but it will take a year or two for
>   the standards to settle out before anyone takes this
>   seriously.
>
>We currently plan to put the small stuff on CD-R; and keep the
>multi-gig stuff on-line for a year. Hopefully by that time DVDs
>(or the next Big Thing) will be ready for prime time.
>
>Also, I think that (at least) every 5 years, the archiving plan
>should be updated, and there is a good chance that it would make
>sense to re-archive the data. I just had the pleasure of copying
>the last byte of data from our stack of Bernoulli cartridges,
>destroying the media, and disposing of the last Bernoulli drive.
>You kind of had to be there, but "there was much rejoicing".
>
>PS -For more about DVD and especially CDs, try www.cd-info.com
>and www.kodak.com/daiHome/techInfo/permanence.shtml
>
>Peter Burtt

Here's Columbia's prescription for a perfect data preservation plan
(IMHO) utilizing backup as only one of the necessary tools.  Please
shoot some holes in it to make it better.  We believe that good
intelligent controversy helps create better technology.

Executive summary:

1.  Murphy LIVES!! Believe in him, he bites.

2.  Back up EVERYTHING at least once per day.  This means a TRUE image.  You
will be glad years later when your present backup vendor discovers that
data which was hidden from their software should have been backed up but
wasn't (remember when NDS was "hard" to back up?).  Or your company may have
an embezzler you need to catch, who erases the files (but not the
data) every night before the daily "backup".  (Individual file retrieval is
done by mounting the tape and letting end-users retrieve their own
files.)  Also, your server can be restored in under an hour to a new,
unformatted drive if it crashes and burns, preventing thousands of lost
man-hours at your company.

3.  Maintain copies of tapes OFF-SITE as follows: keep all dailies for at
least the last month, and each week make sure at least one copy is frozen
and never reused; those frozen copies are now ARCHIVAL copies, not backups.

4.  Do incrementals at least hourly using ??? or mirroring techniques such
as Vinca, Octopus, Etc., or worst case, use Netware's salvage.

5.  Always maintain data media redundancy with multiple copies of the same
data to prevent media errors from becoming probable. (remember Challenger's
TWO booster seal failures?)

6.  Do a complete disaster restore at least annually, putting the existing
drives aside for use only if there were a failure in your plan during the
first month.

7.  Choose well-entrenched, industry-standard backup hardware with
cheap, reliable media that will be supported and available as far into the
future as you can see today.  The odds of a failed restore compound with
every additional piece of media, through media errors and mislabeling (see
the sketch after this list)--so try to fit your backup on ONE piece of media.

8.  Retension tape bi-annually to prevent data "bleed" from one tape
winding to the next.  Recopy the data to new media at least every five
years (tape).  Copy to MULTIPLE copies of optical when financially
practical and store in multiple locations.  Remember, just because the
media supposedly "lasts forever", you can still drop it or lose it!

9.  Review your backup and archiving plan annually and change as needed.
Today's storage devices will certainly not be in use 20 years from now and
you will need to migrate and protect your data according to the current
technologies at that time.

10.  Keep your current top management informed on your data disaster
planning and archiving... if they don't appreciate your efforts and the
value of your company's data now, the new management team will!
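
To put numbers behind point 7, here is the arithmetic on why the odds of a
bad restore climb with every extra piece of media.  The 2% per-piece failure
rate is purely an assumed figure for illustration:

# Chance that at least one piece of an n-piece backup set is unreadable,
# assuming (hypothetically) each piece fails independently 2% of the time.
p_single = 0.02
for n in (1, 2, 5, 10):
    p_any = 1 - (1 - p_single) ** n
    print(f"{n:2d} pieces -> {p_any:.1%} chance of an incomplete restore")
# 1 -> 2.0%, 2 -> 4.0%, 5 -> 9.6%, 10 -> 18.3%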

------ This concludes our recommendations, but you may want to examine the
premises behind them by reading our case study below.  Otherwise, let us know
what you think and skip to the next message.


-- CDP's 21 year backup and archival history --
Here at Columbia we have had the problem of preserving "legacy" data from
the beginning of the company in 1976 when it manufactured tape-based
data-recorders, through its days as a major PC manufacturer, then as a SCSI
software house, to today's mission as a backup software publisher.
Obviously, we have been using a variety of hardware and software solutions
over the years for archiving.  Although we haven't done a perfect job
managing this, here is how we have managed to preserve most of the critical
data across those technologies; it remains intact today.  Our views have been
biased by our own backup/archival experiences over the years, good and bad,
which have also dramatically influenced development of our backup technology.

The early software, until 1980, existed mainly on floppies and was backed up
onto, and recopied from time to time to, new floppies using both file copying
and disk copying.  Great redundancy of the everyday storage medium, the 8"
floppy, along with off-site storage, preserved the data.  At the same time,
our customers' tape data recorders had the ability to copy images of tapes
from one stand-alone tape drive to another.

In the early eighties, these files were copied to the new winchester (5-10
meg) hard drives which could hold an amazing number of floppies.  The
drives were backed up, file by file, on floppies, multiple hard drives, and
later, the new QIC-02 tape drives.  Again, multiple copies of the files in
multiple places on different types of media preserved the files.

In 1987, our new mission was to write the code (device drivers and
utilities) that actually controlled reading and writing to all types of
storage devices (SCSI), for about 40 different operating systems.  This
dramatically changed the way that we had to perform backup.  Copying files
alone was no longer practical or adequate, as we now needed to save the
complete "operating system environment", not just the files.

During device driver development, mistakes would often "wipe out" all the
data on the drive.  Rather than spend four or more hours reinstalling a
UNIX or XENIX kernel for each attempt, we would just recopy the whole hard
drive image from Magneto-Optical (MO) in about 15 minutes.  Switching a
machine from UNIX to NetWare to OS/2 took the same 15 minutes.  We chose MO
because the manufacturers provided us with MO drives and plenty of media at
no charge, and, at the time, tape capacities were relatively small and tape
was somewhat unreliable.  We archived the developers' hard drives to MO and
took them off-site.  All the important files from years before were included
in these images.  We were uniquely able to use MO images because all the
hard drives we used were SCSI, and therefore logically error free.  This
would not have been possible for anyone else: imaging to MO could not
reliably back up or restore the standard hard drives of the time, ST-506 and
ESDI.  Those drives had defects that the OS had to map out before files could
be restored, and each drive type "looked" different to the OS (heads, tracks,
cylinders, etc.).
SCSI drives all looked the same, logical sectors 1-xxx,xxx, and logically
error free, making image the "only way to fly".

Our data archiving was not perfect however.  Because we had a limited
number of MO cartridges, the developers' hard drives and servers were
recopied onto the same MO's over and over again.  Since we didn't often
"freeze" copies of these drives, if an engineer decided to delete some
files, those files were gone forever once the MO was recopied over.  In
one instance, we had actually written a complete backup package for a major
OEM which was deleted when it was determined that we didn't want to publish
it because we didn't want to compete with our (then) backup software
partners.  By the time this was discovered, it was too late.  The lesson
learned?  Use backup media that is inexpensive enough to save many "points
in time" as archival copies.

Since we began developing Snapback in 1994, (an early version with limited
features was originally included as a standard utility in our SCSI
package), we have been making complete image copies of our server to tape
and saving them on at least a monthly basis--a "forever" archive.  This not
only gives us backup, but a complete development history as well, including
all the deleted unpurged files, and even the purged files in some cases.
This is extremely important to document development for intellectual
property reasons--ask your legal department about this.  An ever larger
percentage of almost every company's assets is contained on
those "backup" tapes.

For true "archival" when we want to delete data from the server
permanently, we copy the directory names and any pertinent notes and
archive history to a file so that we will know on which tapes to look for
these files in the future.  Since the data is on multiple tapes, there is
little danger of loss from defective media.  In five years we will start
recopying the images from the existing tapes to either the same type media
or a new type to prevent loss from the magnetic domain changes that
naturally take place over time.

Since all our archiving is done as an image to the tape, we feel we have
the best of all worlds when it comes to retrieving the data--anytime for
the next hundred years.  We can mount the tape for individual file
retrieval or restore the whole image to another hard drive if necessary.

True image backup is the safest type of backup possible today or into the
future, because it is the only complete backup.  There is no chance that
critical data will be left out, and there is little chance for end-user or
programmer error.  Because it is complete, in the absolute worst case, if
you had a image server tape and wanted some data from it.  And you lost the
restore software and somehow all our customers had lost every copy of
Snapback, you could still restore it.  All you would have to do is hire a
relatively bright programmer for under a week, or contract it to someone
like Ontrack, http://www.ontrack.com  (who incidentally already has our
software).  Have him look at the data near the beginning of the tape to
find the first sector of the hard drive.  Next, copy that sector and every
sector thereafter, one to one, to the respective hard drive sector until
you run out of tape.  Since it is an image, there is no possibility for a
software error due to improperly interpreting the tape's format.  Once it
is on the hard drive, you should be able to boot and go, assuming you can
still find an (Intel?) machine to run it on!  If the server's files had
been encrypted however, you would only have the same rights to the files as
the original owner.
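
For what it's worth, here is a generic sketch of that one-for-one copy in
Python.  The device paths and chunk size are made up, and this is a plain
illustration of the idea, not Snapback's actual tape format or code:

# Generic sector-for-sector image restore: read fixed-size chunks from the
# tape device and write them, in order, onto the target drive.
CHUNK = 64 * 1024                                # illustrative block size

with open("/dev/tape", "rb") as tape, open("/dev/target_disk", "r+b") as disk:
    while True:
        data = tape.read(CHUNK)
        if not data:
            break                                # ran out of tape: done
        disk.write(data)                         # one-to-one, same order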

I would appreciate your comments to help us properly advise others and to
create the storage solutions that YOU want.

Alan L. Welsh, President Columbia Data Products, Inc.
http://www.cdp.com       cdp@cdp.com
1070b Rainer Dr. - POBOX 142584   Altamonte Springs, Florida 32714
(407) 869-6700 = Voice - (407) 862-4725 = fax

---------

Date: Wed, 5 Feb 1997 14:10:11 -0600
From: Joe Doupnik <JRD@CC.USU.EDU>
Subject: Re: Archiving vs backup issues

	<long interesting parts omitted>
>	Have him look at the data near the beginning of the tape to
>find the first sector of the hard drive.  Next, copy that sector and every
>sector thereafter, one to one, to the respective hard drive sector until
>you run out of tape.  Since it is an image, there is no possibility for a
>software error due to improperly interpreting the tape's format.
<snip>
-----------
	Except:
	when disk sectors are bad and are remapped by the drive
	when the disk controller is mistakenly set up for DOS mapping
of large drives and things get overwritten
	when the new target disk is just a little shy of sectors, since
manufacturers are fuzzy about what's a 2GB drive these days
	And we do ask the standard question about tape restoration: what
happens if a bad spot appears on the tape? Will the program carry on
regardless, or will it fail like nearly all supposedly big-time
backup/restore programs?
	Joe D.

---------

Date: Wed, 5 Feb 1997 21:50:27 +0100
From: "Arthur B." <arthur-b@ZEELANDNET.NL>
Subject: Re: data archiving

<snip>
>Possible archiving solutions
> - 4mm DAT (our current backup solution)
>   The tapes will last for 5-10 years if properly stored, but
>   will I still be able to find a functional drive? Will I be
>   able to find appropriate s/w to restore the files?

1 to 3 years is a safer margin. You can't tell what will
happen to the tape, or how it will be stored, etc.

> - DLT (using this now to back up our imaging data)
>   Tapes are very durable, but I have the same h/w worries as
>   for DAT, to a lesser degree. My s/w concerns remain.

You're right.
Important backups that are older than one year should be restored
back on the server and backed up again using the current
backup device.

> - CD-R, i.e. writeable CDROM
>   (My current top choice.)
>   PROS- The media is supposedly good for ~100 years, and the
>   file format is well standardized; i.e. no backup s/w
>   compatibility issues. Also, CDs are very entrenched in the
>   marketplace, so I expect that compatible hardware will be
>   around for a while.
>   CONS- Low capacity. 15GB of 40K TIFs would use a lot of CDs,
>   which takes a lot of time and effort to burn. I don't want
>   to become a disc jockey!

No. CD-Rs don't have a protective coating like CD-ROMs do.
10 years is what I've heard, *if* they're handled properly.
If size is a problem, think about optical discs that are 12" in diameter.

> - DVD
>   Sounds very promising, but it will take a year or two for
>   the standards to settle out before anyone takes this
>   seriously.

If it's not standardized, don't use it unless there's no other option.

>We currently plan to put the small stuff on CD-R; and keep the
>multi-gig stuff on-line for a year. Hopefully by that time DVDs
>(or the next Big Thing) will be ready for prime time.
>
>Also, I think that (at least) every 5 years, the archiving plan
>should be updated, and there is a good chance that it would make
>sense to re-archive the data. I just had the pleasure of copying
>the last byte of data from our stack of Bernoulli cartridges,
>destroying the media, and disposing of the last Bernoulli drive.
>You kind of had to be there, but "there was much rejoicing".

Which is why I re-archive once a year.

>Well, I've rambled enough. Comments, anyone?

What about hard-copy backup in an OCR font?
Think about it. Paper lasts about 50 years on average, and
when handled well, for hundreds of years. When damaged...
some companies can even read burned paper.
OCR software will be available for years to come, and will
probably get better year by year.
When you need to 'restore', you simply look up the set
of needed papers, scan them in with your current software
and hardware, clean them up a little, and pull them into
whatever word processor you favour at the time, then hand the result out.
Or just hand out a copy of the paper(s)...

>PS -For more about DVD and especially CDs, try www.cd-info.com
>and www.kodak.com/daiHome/techInfo/permanence.shtml

---------

Date: Thu, 6 Feb 1997 18:28:57 -0600
From: "Alan L. Welsh" <snapback@IX.NETCOM.COM>
Subject: Easy hard drive upgrades and image restore issues.

>	<long interesting parts omitted>
>>Have him look at the data near the beginning of the tape to
>>find the first sector of the hard drive.  Next, copy that sector and every
>>sector thereafter, one to one, to the respective hard drive sector until
>>you run out of tape.  Since it is an image, there is no possibility for a
>>software error due to improperly interpreting the tape's format.
>>
>>...Storage solutions from CDP...the way they were MEANT to be!
>>Alan L. Welsh, President Columbia Data Products, Inc.
>
>Except:
>when disk sectors are bad and are remapped by the drive

DEFECT ISSUES ON TODAY'S DRIVES--
As per the SCSI spec, all SCSI I and SCSI II drives are logically defect
free for the life of the drive and therefore any reads or writes done to
any logical block on the drive should appear to be error free to the device
driver and the OS.  If errors are discovered, all drive manufacturers will
repair or replace the drive as defective.  Of course, the media is not
really error free; it only appears to be, because the drive maps the errors
out itself before anything goes out the SCSI cable to the device drivers.
That is why, from the lowest level,
there is only logical sector addressing, (LBA), and no accessible heads,
tracks, and cylinders on a SCSI drive.

Device driver writers are the only guys who actually get to talk to the
devices at this lowest possible level, LBA.  Only the drive manufacturer
using special equipment can access any bad sectors that have been mapped
out by the drive.  To prevent new "grown defects" from causing everyone to
return their drives as defective, drives dynamically remap sectors that are
starting to have difficulty reading before they actually go bad.  The remap
works like this on the drive side: the good data from the failing sector is
copied to a spare sector on the drive and the internal index that converts
heads, tracks and cylinders to LBA is then updated.  That is why your
NetWare error logs never contain (or had better not) "error reading from
drive", as the drive takes care of this before the device driver or NetWare
sees it.   Although addressing the drive's sectors is different for IDE's,
this same type of defect mapping has been the standard for the last four
years at all of the IDE drive manufacturers.
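
A toy model of what the drive is doing internally -- purely illustrative, not
any vendor's actual firmware:

# The drive keeps its own map from logical block addresses (LBA) to physical
# locations, plus a pool of spare sectors it can substitute in.
physical_of = {0: "cyl0/head0/sec0", 1: "cyl0/head0/sec1", 2: "cyl0/head0/sec2"}
spares = ["spare0", "spare1"]

def remap(lba):
    # Copy the still-readable data to a spare and point the LBA at it.
    physical_of[lba] = spares.pop(0)

remap(1)                      # LBA 1 was getting hard to read
print(physical_of[1])         # -> 'spare0'; the host never sees an error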

BACKING UP EXISTING HARD DRIVE ERRORS --
But let's suppose you do have a drive with defects.  In this case, all the
"bad" sectors would be either copied to the tape or skipped and the words
"BAD BLOCK" would be put in that sector's place, (if that option was set,
after xx retries).  At this point you would have as complete a
copy as is physically possible to get without drive disassembly and
repair.  You can then copy this image onto a logically defect-free drive,
the same size or larger, and it will function identically to the original.  The
"bad" sectors will still be mapped out by the OS when they shouldn't be.
But you shouldn't be concerned about this unless you have allowed thousands
of grown defects to arise before replacing the drive (ugh!).  No harm or
abnormal operation will occur.

DRIVE ERROR SUMMARY
Image backup was correctly dropped years ago as a viable backup method
because it was impossible to find an error free drive to restore it to.  It
is a faster, easier, and safer method however,  given error free hard
drives, reliable tape drives, and software that will allow fast, easy, and
secure individual file, volume, partition, or whole drive restoration.

>Except: when the disk controller is mistakenly setup for DOS mapping
>of large drives and things get overwritten

Since we back up in LBA mode and don't use DOS's INT-13, our software won't
ever see or know about the differences in the SCSI controller's setup.  I
believe that you are referring to the switch on SCSI controllers that sets
"extended BIOS support for drives over 1 GIG" in the BIOS on Adaptec (and
other) cards.  Our solution is: back up and restore normally.  If you have
trouble booting the new drive, change this switch on your controller.  The
switch's only function is to determine the drive's parameterization during
and after booting while DOS apps are using INT-13; it is not used by our
software.  This problem will only be seen when changing an under 1 gig SCSI
drive to an over 1 gig SCSI drive.  Again, if it won't boot, just change
this switch in the SCSI controller's CMOS.

>Except: when the new target disk is just a little shy of sectors, since
>manufactures are fuzzy about what's a 2GB drive these days

Our software will ensure that the prior drive image will fit, or we will not
proceed to do the restore.  At that point, you can get a larger drive, or
if it is "a little shy", put a smaller DOS partition on the drive manually,
and then restore the volumes from tape.  This same method of manually
creating the DOS partition and then restoring the volumes from tape will
also be required for restoring to a different type of IDE drive, moving
from IDE to SCSI, SCSI to IDE, or combining multiple physical hard drives
into one drive as well.  Since most server drives are SCSI, this is not as
much of an issue.  In most cases, users will be going from a SCSI drive to
a larger one which can be completed with our software (which includes a
resize utility) in a few hours instead of days.

>Except:   And we do ask the standard question about tape restoration:
>what happens if a bad spot appears on the tape? Will the program carry on
>regardless or will it fail like nearly all supposedly big time backup/
>restore programs.
>	Joe D.

I wish we could solve this one in software for you, but with today's tape
drives we can't, and neither can the "big-time" houses.  What happens is
this:  We're reading from the tape and encounter an error.  Our choice is
to retry, which we do, or skip to the next block on the tape.
Unfortunately, today's tape drives will not allow us to skip to the next
block on the tape and so it is totally lost in space and the restore is
aborted.  To prevent someone from inadvertently using this incomplete
restored hard drive, our software zeroes out sector zero, so it won't boot up
and appear to be working right up until something asks for the unrestored data.  The
upside of this problem is this: DLT's are so reliable that you could take a
2" chunk out of the middle of the tape and not lose a byte; most of today's
helical scan or DLT drives are VERY reliable and just don't fail on backup
or restore very often.  If they do, you should be using the redundancy rule
of having multiple backup tapes.  The excuse "the dog ate my homework"
didn't work in school and won't with anyone else if a tape is mangled--have
multiple backup tapes, even if the dates are slightly different.

CONCLUSION --
Creating a simple image backup software package is the easy part, but of
very little value by itself without a great deal of additional software.
Doing it right, by adding all the necessary safeguards to protect the user
from the potential problems and to overcome the limitations you
have mentioned has truly been a rather daunting task for the last few
years.  I believe we have accomplished this and that the benefits to the
user have been worth it.  The difficult part for us now is to explain to
users that these natural limitations have been eliminated and they should
expect that restoring a server is a simple one step process--as long as ALL
the data is on the tape.  This newsgroup as well as the press can play a
big part in changing the user's backup paradigm, making it safer, easier,
and cheaper to protect against data loss.

------------------------------

Date: Fri, 14 Feb 1997 09:20:36 -0800
From: Floyd Maxwell <floyd@DIRECT.CA>
Subject: Re: Stac Replica 3

>Just wanting to get some opinions/input on the above program.  We are
>looking at this as a disaster recovery method and would appreciate any
>input either positive or negative.

See page 79 of the 27 Jan 1997 issue of PC Week (Vol. 14, No. 4).
It's a very positive full-page article on Stac's Replica.

---------

Date: Fri, 14 Feb 1997 11:41:53 -0600
From: "Ron C. Neely" <ccron@CC.MISSOURI.EDU>
Subject: Stac Replica 3 -Reply

We've been using it for several months and are quite satisfied with it.
During our initial testing of the product on a Compaq Proliant 1500, we
simulated a disk failure by wiping out the RAID configuration in CMOS.
Using the two recovery diskettes we had made for this machine, it wasn't
even necessary to FDISK  the drive array to recover the server.
It restored the DOS partition, the hidden Compaq system partition, the
Netware partition and NDS all in less than 15 minutes with a DLT2000XT
tape drive.  All this with one diskette swap and a couple of presses of
the ENTER key!  And it's very easy to install and configure.  It sure
makes me sleep better.

Stac offers a free 45 day eval on their Web site.

---------

Date: Sat, 15 Feb 1997 09:10:01 -0500
From: Tony Sonderby <asonderb@diac.com>
To: "'netw4-l@ecnet.net'" <netw4-l@ecnet.net>
Subject: RE: Stac Replica....

We evaluated Replica for some time.  I'm not sure if we were missing
patches, or if our server wasn't quite right, but we had a problem akin
to a memory leak.  Cache buffers would decrease over a three-day period,
then we'd have to reboot.  If we mounted a tape volume to retrieve a file,
not only would it take forever to retrieve the file, but this would also
reduce available cache buffers by about 7%, even after dismounting the
tape volume.

We paid someone (a CNE) to come in and double-check our server configuration,
and had STAC tell us things looked OK on their side.  After throwing up our
hands and shrugging our shoulders, we demo'd ArcServe.  What a difference!

I'm sure there are Replica installations out there that work like a breeze,
but that wasn't *my* experience.

------------------------------

Date: Wed, 19 Feb 1997 00:03:32 +0100
From: "Arthur B." <arthur-b@ZEELANDNET.NL>
Subject: Re: Loss Evaluation

>I would like to know if anyone has any articles or information -- an
>average figure or some documentation -- on what it costs a company to
>totally lose the data on a decent network (1-2 GB) that is used for most
>or all record keeping, ordering, selling and so forth. The reason for this
>is that one company is balking at the idea of an automated backup system,
>saying that they have never had a problem before. The backup now takes 3
>hours and is done only about once a month, and the last time I tried to
>restore a file it took 5 tapes to find a good one.

Maybe you could save yourself some calculations by saying to the boss:

You: "I'm giving everybody a free.. p a i d ... day off"
Boss: "Are you mad! You know what that costs?!!!"
You: "No I don't but the fileserver just crashed and I'm a bit busy right
      now but I'll come back for those figures later."

Another easy way to make the point is to take 'last year's profit'
(say $2.2 M) and divide by the 'number of production days' (say 220 days).
Then you can say that, by that estimate, each day off-line costs about $10,000.
That's not including the cost of determining which data was lost (IOW which
bills may never get paid) and the cost of retyping and recalculating it
(say, company-wide, an average of 16 work hours per lost day:
16 x $100 x 1 = $1600 for each lost day).

IOW, take the cost of administrating and billing one company work day
and keeping track of it all, multiplied by 1.5 or so because people will have
to search for the data instead of finding it on their desks.

Which still doesn't include the cost of possible overtime, the cost of
being paid later or being billed for paying too late yourself,
and the cost of running low on stock. Etc. etc.
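
Spelled out in code, with the same example figures (none of these numbers
are real data, just the placeholders used above):

# Back-of-the-envelope cost of one lost/off-line day, using the example
# figures above: last year's profit, production days, re-entry hours, rate.
annual_profit   = 2200000      # say $2.2 M
production_days = 220
reentry_hours   = 16           # company-wide re-typing and recalculating
hourly_rate     = 100

downtime_cost = annual_profit / production_days   # about $10,000 per day
reentry_cost  = reentry_hours * hourly_rate       # $1,600 per lost day
print(downtime_cost + reentry_cost)               # ~$11,600, before overtime etc.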

It really depends on how accurately the company keeps paper records.
If the company keeps paper records on a weekly basis afterwards, while
input forms are tossed away after they are typed into the computer, the
cost will be much higher. Maybe even too high to let the company survive.
etc. etc.

The chance of such an event?
Well, are you insured against fire? Why? It doesn't happen that often...

The cost for a good backup solution? ..... (fill in the blanks).

------------------------------

Date: Sat, 22 Feb 1997 11:04:49 -0400
From: "John P. Bruni" <BRUNIJ@ISD.UPMC.EDU>
Subject: One of those "Gotcha's"

Over the years I have gotten much from this list but never been able to give
anything back.  Not that this post will "balance the books", but it may help
someone avoid the same situation.

I am using Palindrome's (now Seagate's) Network Archivist for DOS 3.1 as
my backup system software, with a Palindrome Fast 2000C DAT drive
(an OEM Archive drive).

This system has served me just fine over the years until last Friday when the
tape drive failed.  Now, I always thought I was prepared for such a situation
as I had a HP Jetstore DAT drive sitting on a shelf as a backup.

After patting myself on the back for being so well prepared, I connected the
new drive and discovered that none of the tapes made on the Palindrome drive
could be read on the Jetstore.  The software recognized the drive just fine
and will read/write tapes formatted in the JetStore, but it sees the
Palindrome-produced tapes as unformatted.  Calls to Seagate and HP just had
each pointing the finger at the other.

I have no one to blame for the situation I am in but myself.  I should have
simulated a hardware failure long ago where I would have discovered this
before it became critical.

Another thing I learned...I will fight harder for a new backup system in the
next budget go-round.

------------------------------