These notes cover useful things you can do with `dd`.
- 3 Securely erase a drive
- 3.1 Tinfoil hat paranoia
- 4 Erase MBR
- 5 Erase GPT (GUID Partition Table)
- 7 Copy a drive to an image file
- 8 Image a CD or DVD
Why use dd instead of cp?
In many cases you can use cp where dd is used. What dd adds is control over the copy: you can set the block size, specify how read errors are handled, and limit how much data is copied. dd isn't much more than a fancy cp command.
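For example, dd can copy an arbitrary slice out of a file, which plain cp cannot do (a self-contained sketch; the file names are made up):

```shell
# Create a 4 MiB demo file, then copy only its third mebibyte.
# skip= skips input blocks, count= limits how many blocks are copied.
dd if=/dev/zero of=input.bin bs=1M count=4
dd if=input.bin of=slice.bin bs=1M skip=2 count=1
```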
Burn Linux ISO images to a USB flash drive using Apple Mac OS X
This example burns an ISO image to a USB flash drive. The source image is ubuntu-13.10-desktop-amd64.iso. On Mac OS X the ISO image must first be converted to a DMG image.
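A sketch of the usual procedure; diskN is a placeholder for the flash drive's device node, which you must identify yourself with `diskutil list`:

```shell
# Convert the ISO to a read/write DMG (hdiutil appends .dmg to the name).
hdiutil convert ubuntu-13.10-desktop-amd64.iso -format UDRW -o ubuntu-13.10-desktop-amd64.img
# Identify the flash drive, then unmount it (not eject) before writing.
diskutil list
diskutil unmountDisk /dev/diskN
# Write to the raw device node; rdiskN is much faster than diskN.
# Note that BSD dd spells the block size with a lowercase m.
sudo dd if=ubuntu-13.10-desktop-amd64.img.dmg of=/dev/rdiskN bs=1m
diskutil eject /dev/diskN
```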
Securely erase a drive
If you are in a hurry then just drill a hole through the top of the case into the platters. A professional data recovery service might be able to get some data off the damaged platters, but it would be very expensive, and I don't know of any that actually offer such a service.
You can use `dd` to destroy just the data without destroying the drive.
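A sketch (sdXXX is a placeholder for the device to wipe; dd stops with a "no space left on device" error once the whole drive has been written):

```shell
dd if=/dev/zero of=/dev/sdXXX
```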
You can also use `cp` or `cat`:
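Either of these has the same effect (sdXXX again a placeholder):

```shell
cp /dev/zero /dev/sdXXX
cat /dev/zero > /dev/sdXXX
```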
Some say you should write random data to the drive (see Tinfoil hat paranoia below), but /dev/urandom is nearly ten times slower than /dev/zero, and using /dev/random instead of /dev/urandom is impractical because it blocks once the kernel's entropy pool is exhausted.
Tinfoil hat paranoia
It takes about 15 minutes to destroy a 1 GB file using GNU `shred` (default options). It takes 30 seconds to destroy the same file using `dd if=/dev/zero of=somefile bs=1024 count=1M`. This is on a laptop with a 1.6 GHz dual-core CPU, 2 GB of RAM, and a Seagate Momentus ST9160823AS drive with an ext3 filesystem -- in other words, nothing fancy.
Some people will tell you that simply overwriting data isn't truly secure because they heard that it's possible to read data that has been overwritten (see data remanence). Some believe that you must overwrite a bit multiple times to ensure that there is no way to recover the bit that had been stored there, and there are official guidelines based on this belief. My belief is that this is a myth. The idea originated with Dr. Peter Gutmann, who speculated that overwritten data might be recovered through the use of Scanning Transmission Electron Microscopy. It is an interesting idea, but the key fact is that it remains an unsubstantiated theory: no one has ever demonstrated recovering even a single bit of overwritten data using this technique or any other. No commercial forensics or data recovery firm offers any service that can recover data once it has been overwritten.

Obviously the NSA is not going to advertise this capability if they have it, but I believe that neither they nor any advanced species of space aliens that may be visiting us have this ability. The point is that you can't hire anybody for any amount of money to recover overwritten data for you. If your data is so sensitive that you can't accept the risk that the NSA or space aliens might be able to unerase your drive, then you don't need my advice; you might need advice from someone in a different profession.

The bottom line is that most tools that claim to 'securely' erase a drive use such extreme measures that it can take hours to erase a drive, yet there is not a single example of anyone recovering data after it has simply been overwritten once with zeros.
If you want to erase a drive fast then use the following command (where sdXXX is the device to erase):
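A sketch; the large block size keeps syscall overhead low:

```shell
dd if=/dev/zero of=/dev/sdXXX bs=1M
```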
If you prefer to use the GNU `shred` command then you may want to put this in your ~/.bashrc or alias file to make it a little more sane:
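One possible alias (an assumption about taste, not a requirement): a single pass of zeros instead of shred's default three random passes.

```shell
# -n/--iterations sets the number of overwrite passes;
# -z/--zero adds a final pass of zeros.
alias shred='shred --iterations=1 --zero'
```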
My argument also applies to Flash memory media, which, in consumer devices, is slowly replacing magnetic media. In fact, it's probably easier to decap a flash chip and read the electron potentials trapped in the floating gates. Assuming this is possible, this would still require a laboratory and lots of money. The problem becomes even harder with MLC flash memory, which is the most common.
caveat on Flash memory storage
Note that with Flash memory storage there is the possibility that a few random chunks of your files may be frozen in the flash device in such a way that you can't access them, can't delete them, and can't even be sure whether this has happened. Flash media is inherently unreliable, so all flash devices contain spare memory. As the device runs, it detects when parts of the memory have become too unreliable to be trusted. When this happens the device substitutes some of the spare memory for the sections that are marked as bad, and the data in the bad sections is copied to the spare memory. Most manufacturers don't mention this feature or provide a way to tell when and where it happens; the substitution is totally transparent. The bad memory doesn't go anywhere, but it's never used again. The problem is that these forgotten sections of bad memory remain on the device, and they may be readable. It may be as simple as removing the flash storage chips and attaching them to a different device that can read the spare sections of memory. If the controller is built into the flash chip, then it may be possible to cut off the top of the chip and read the values of the individual memory cells. I don't personally worry about this: while it may not be outside the realm of science fiction, it is still difficult, and the recoverable sections would be tiny random chunks out of the entire filesystem, though perhaps enough to cause concern for some people. I'm curious whether any forensic professionals have ever made use of this idea to recover useful information.
One step disk wipe tool
This is useful if you want to recycle a lot of drives: Darik's Boot and Nuke
Erase MBR
I had Linux with GRUB installed on a machine. I needed to get rid of it and put Windows on the machine. I used a Ghost recovery disk to restore Windows on it, but Ghost didn't restore the MBR. GRUB was still lurking in the Master Boot Record. On boot GRUB would try to start but would error out. Wiping out the MBR fixed the problem. This will wipe out the MBR of a disk (sdXXX in this example) but keep the partition table and disk signature:
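A sketch (sdXXX is a placeholder):

```shell
# Zero only the 440 bytes of MBR boot code; the disk signature
# (bytes 440-443) and the partition table (bytes 446-509) survive.
dd if=/dev/zero of=/dev/sdXXX bs=440 count=1
```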
If you also want to totally erase the entire MBR, including the disk signature and partition table, then use the following command:
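```shell
# Zero the entire 512-byte sector 0: boot code, disk signature,
# partition table, and the 0x55AA MBR signature.
dd if=/dev/zero of=/dev/sdXXX bs=512 count=1
```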
If you don't want to bother remembering the exact amount of the disk to erase, and you don't care about erasing other data on the drive, then simply blast away the first megabyte or so. The exact amount doesn't matter.
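For example:

```shell
dd if=/dev/zero of=/dev/sdXXX bs=1M count=1
```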
disk signature of boot disk
The disk signature is an obscure topic. It is the 4 bytes in the MBR starting after the first 440 bytes (offset 0x01B8 through 0x01BB). Often you can mess with it without problems, but in certain circumstances Linux may need to see a specific disk signature on the boot disk. The most critical fact is that the disk signature of the primary BOOT disk must be unique. In days past, I did not know the significance of the disk signature, so I would often zero it out along with the MBR boot code using `dd if=/dev/zero of=/dev/sdXXX bs=446 count=1`. That is not guaranteed to be harmless; it may cause problems, although usually it is harmless. It is also bad to COPY a disk image including the MBR and then mount both copies on the same system: the system may fail to boot, or nothing may go wrong at all!
Do not confuse the disk signature with the MBR signature. The MBR signature is always 0xAA55 starting at offset 0x01FE. It is stored little endian, so 0x01FE:0x55 and 0x01FF:0xAA.
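To inspect both signatures yourself, something like this works (sdXXX is a placeholder; `od` prints the bytes in hex):

```shell
# The 4-byte disk signature at offset 440 (0x01B8):
dd if=/dev/sdXXX bs=1 skip=440 count=4 2>/dev/null | od -A x -t x1
# The 2-byte MBR signature, 0x55 0xAA, at offset 510 (0x01FE):
dd if=/dev/sdXXX bs=1 skip=510 count=2 2>/dev/null | od -A x -t x1
```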
ms-sys
The `ms-sys` command may be helpful in working with the MBR and disk signatures.
See also
- EDD
- BIOS Enhanced Disk Drive Services (EDD) 3.0. This protocol determines which disk the BIOS tries to boot from. It uses the disk signature bytes: the 4 bytes in the MBR starting after the first 440 bytes (offset 0x01B8 through 0x01BB).
Erase GPT (GUID Partition Table)
If you see this error while using fdisk then you may want to remove all trace of GPT.
To erase the GPT you need to erase the table at both the beginning and end of the disk. You need to use blockdev to calculate the block number at the end of the drive.
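A sketch, assuming 512-byte logical sectors (which is what `blockdev --getsz` counts) and the default 128-entry partition table; sdXXX is a placeholder:

```shell
# Primary GPT: protective MBR (LBA 0), header (LBA 1), and the default
# 32 sectors of partition entries (LBA 2-33).
dd if=/dev/zero of=/dev/sdXXX bs=512 count=34
# Backup GPT: the last 33 sectors of the disk (entries plus header).
dd if=/dev/zero of=/dev/sdXXX bs=512 count=33 \
   seek=$(( $(blockdev --getsz /dev/sdXXX) - 33 ))
```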
Older:
Error: Unable to install GRUB
While installing Ubuntu on a disk that may have been previously used you may get this error when you get to the very final end of the installation process.
The cause of this is GPT. You must remove the partition table before you install Linux and GRUB. Do this with an Ubuntu Desktop LiveCD (the Ubuntu Server CD does not have a live option for debugging... go figure). Remember, the GPT is stored both at the beginning and the end of the disk. You must remove both copies.
Note 1: you may have to zero out a larger range of blocks for the secondary GPT, because I am not certain of my math. Most modern disks use 4096-byte sectors internally but may report 512 bytes to the OS, and I'm not sure which size the drives use for the LBA arithmetic. I think LBA blocks are 512 bytes, but in these examples I pretend they may be 4096 bytes just to be safe. blockdev counts 512-byte sectors for the --getsz option, but when dd runs with bs=4096 its seek option counts 4096-byte blocks, so the result of blockdev has to be converted to a seek point in 4096-byte blocks. Because I use 4096 bytes for the LBA block size, this may over-estimate the size of the GPT tables and so could remove more than just the GPT tables. For my purposes this is OK because I just want to ignore whatever was on the disk before and get GRUB to install properly. It is bad if you are trying to surgically remove the GPT tables while preserving all other data.
Note 2, If you are exploring GPT by using dd to dump disk information, remember to use skip instead of seek.
Note 3, While using a live CD to get a shell to do any of this, you may also need to remove device-mapper targets (a wild guess), dmsetup remove vg--vmh--root, or something like that.
Easter Egg
While searching for strings in dd dumps of the GPT table of a drive I noticed the string Hah!IdontNeedEFI. A little research shows that this is the actual official GUID of the GPT BIOS boot partition, spelled out in ASCII.
Fill a file with bytes
This creates a 10MB file filled with zeros (0):
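A sketch (here 10 MB means 10 MiB):

```shell
dd if=/dev/zero of=zero.bin bs=1M count=10
```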
You can use /dev/zero and `tr` to generate and fill a file with any given byte constant. This creates a 10MB file filled with ones as a bit pattern (0b11111111, 0377, 255, or 0xff).
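One way, using octal escapes with `tr` (\0 is the zero byte, \377 is 0xff):

```shell
# Rewrite every zero byte from /dev/zero as 0xff.
dd if=/dev/zero bs=1M count=10 | tr '\0' '\377' > ones.bin
```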
Filling a file with bytes other than zero can be handy for use with devices such as framebuffers, where you want to clear the display but set all the pixels to white instead of black. Note that the file may be larger than your framebuffer; the framebuffer device should give a harmless error when you try to write beyond its end.
This can also be done without using `dd`:
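For instance, with `head` and `tr`:

```shell
# 10 MiB of zeros, then 10 MiB of 0xff bytes, no dd involved.
head -c 10485760 /dev/zero > zero2.bin
head -c 10485760 /dev/zero | tr '\0' '\377' > ones2.bin
```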
Copy a drive to an image file
The following will image a drive and compress the image.
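A sketch (sdXXX is a placeholder; the conv options are discussed next):

```shell
dd if=/dev/sdXXX bs=4096 conv=noerror,sync | gzip -c > sdXXX.img.gz
```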
The conversion options (conv) are useful when working with physical devices such as drives. The noerror option says that the copy process should continue even if there are read errors from the drive. With this option a read error will cause the rest of the current block to be skipped and a warning message printed to stderr. The sync option says that the missing data from any skipped blocks should be replaced with null bytes. This ensures that bytes in the output file are in the same offset position they would have been if there had been no skipped blocks. Setting bs to the physical block size used by the drive ensures that as little data as possible is skipped due to read errors. If bs were set higher than the physical size of the block with the error, then more data than necessary would be skipped. Most drives use 4096-byte blocks. CD drives use 2048-byte blocks. If you want to be very conservative you can set bs=512 to deal with older drives, or devices such as USB flash drives where the physical block size might not be defined by an industry standard.
Many guides written for copying drives also show the notrunc option, but as far as I can tell this option is irrelevant in this context. It may be that dd would stop the copy process if all remaining blocks in the device were filled with nulls, so the output image size would be smaller than the drive. Specifying notrunc might tell dd to continue copying even the null blocks so that the output image would be identical to the contents of the drive. At least, I think that is the reasoning some people use to explain why they add the notrunc option, but I have found this not to be true. This option seems to have no effect in any of the use cases I have tested.
See Forensics,_Undelete,_and_Data_Recovery for more powerful tools, such as ddrescue, to recover damaged drives.
restore from an image
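A sketch of the reverse operation, assuming a gzip-compressed image like the one created above:

```shell
gunzip -c sdXXX.img.gz | dd of=/dev/sdXXX bs=4096
```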
Image a CD or DVD
Basically: determine the exact size of the ISO filesystem on the disc, then copy exactly that many 2048-byte blocks.
Note that /dev/cdrom, /dev/dvd, /dev/cdrw, and /dev/scd0 are usually just sym links to /dev/sr0 or some other optical disc device.
The following shows the naive, bad way to image a CD-ROM or DVD to an ISO file. This works, but it will often grab a few extra null blocks, which will throw off the checksum of the disc image. If you burn this image onto a new disc, then the checksum of the new disc will not match the checksum of the image file.
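The naive version looks like this:

```shell
dd if=/dev/cdrom of=disc.iso
```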
The following will create a correct image of a CD-ROM or DVD. This ensures that the image will have exactly the same md5sum or checksum value no matter what device or operating system is used to burn the image. This is a two-step process.
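A sketch of the two steps, assuming `isosize` from util-linux:

```shell
# Step 1: ask the disc for its ISO filesystem size in 2048-byte sectors.
blocks=$(isosize -d 2048 /dev/sr0)
# Step 2: copy exactly that many 2048-byte sectors.
dd if=/dev/sr0 of=disc.iso bs=2048 count=$blocks
```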
You can do this in one line:
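The same two steps with command substitution:

```shell
dd if=/dev/sr0 of=disc.iso bs=2048 count=$(isosize -d 2048 /dev/sr0)
```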
You can turn this into an alias. The alias, `cdgen`, generates an ISO image from a directory tree and dumps it to stdout. The alias, `cddump`, dumps an ISO image to stdout. The alias, `cdburn`, reads an ISO image from stdin and burns it to a disc. These assume the primary device, /dev/dvd, is the one you want (it works for CD as well as DVD).
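One possible set of definitions; treat these as sketches. They assume `mkisofs` (or genisoimage, which writes the image to stdout when -o is not given), `isosize` from util-linux, and `wodim`, which accepts `-` to read the image from stdin:

```shell
alias cdgen='mkisofs -R -J'    # ISO image from a directory tree, to stdout
alias cddump='dd if=/dev/dvd bs=2048 count=$(isosize -d 2048 /dev/dvd)'
alias cdburn='wodim dev=/dev/dvd -'    # burn an ISO image read from stdin
```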
Here are some examples of how these can be used:
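For instance (the paths are made up):

```shell
cdgen ~/photos > photos.iso    # build an image from a directory tree
cdgen ~/photos | cdburn        # build and burn in one pipeline
cddump > backup.iso            # image the disc currently in the drive
cddump | md5sum                # checksum the physical disc
md5sum backup.iso              # should print the same checksum
```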
Steve Litt's `rawread` script does this automatically, with the added advantage that it gets the Logical Block Size as reported by the drive instead of assuming that it is 2048. All ISO-formatted CDs and DVDs use 2048 for the Logical Block Size, though, so I usually just use the aliases above.
Steve Litt's `rawread` script can be used to do things like the following. Create an ISO disc dump:
check the md5sum of the physical optical disk:
Image a drive with compression
- Backup
- Restore
- Save drive geometry info because cylinder size helps determine where a partition is stored
- Help the drive image compress more by filling unallocated space with zeros. Do this before you create the backup image. Don't do this on images to be used for forensic recovery! This creates a file filled with zeros and then deletes it
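The bullet points above can be sketched as follows (sdXXX and the mount point /mnt/sdXXX are placeholders):

```shell
# Backup: image the drive and compress it.
dd if=/dev/sdXXX bs=4096 conv=noerror,sync | gzip -c > sdXXX.img.gz
# Restore:
gunzip -c sdXXX.img.gz | dd of=/dev/sdXXX bs=4096

# Save the partition layout; `sfdisk /dev/sdXXX < sdXXX.layout` restores it.
sfdisk -d /dev/sdXXX > sdXXX.layout

# Before imaging, fill unallocated space with zeros so the image compresses
# well, then delete the fill file. Never do this for forensic recovery!
dd if=/dev/zero of=/mnt/sdXXX/zero.fill bs=1M
rm /mnt/sdXXX/zero.fill
```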
Image a drive over a network with `dd` and `ssh` or `nc` (netcat)
You can use netcat or SSH to copy a disk over a network. If you are doing this on a live server, you should unmount the drive, switch to single-user mode, or boot from a live CD. You don't strictly have to unmount the drive: you may copy a live, mounted drive, but you should expect some corrupt files. This is certainly not the correct way to do it, but I have never had a problem. When you try to mount the drive image later, it will complain that it was not cleanly unmounted or that its journal is inconsistent. It is better if the drive is unmounted or mounted read-only.
I prefer using `ssh` over `netcat` because the entire process is started from one machine in one step and all the traffic is encrypted.
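With `ssh` the whole transfer runs in one pipeline from the receiving machine, pulling from the sender (the hostname `sender` and the root login are placeholders):

```shell
ssh root@sender 'dd if=/dev/sdXXX bs=4096 conv=noerror,sync | gzip -c' > sdXXX.img.gz
```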
This example uses 192.168.1.100 for the receiving machine's IP address. Port 2222 is used as the listening port on the receiving machine. You may substitute any free port. First, start the Netcat listener on the receiver:
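```shell
# Traditional (GNU) netcat syntax; BSD netcat drops the -p: nc -l 2222
nc -l -p 2222 > sdXXX.img.gz
```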
Then start the pipeline for `dd|gzip|nc` on the sender:
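```shell
dd if=/dev/sdXXX bs=4096 conv=noerror,sync | gzip -c | nc 192.168.1.100 2222
```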
Show progress status statistics of `dd`
Operations with `dd` can take a long time. Older versions of GNU `dd` have no command-line option to print progress (newer versions support `status=progress`), but you can send the `dd` process a USR1 signal to have it print its progress statistics. For example, say you started `dd` and you know its PID is 15045. Example:
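Signalling a known PID is just `kill -USR1 15045`. Here is a self-contained sketch that uses `$!` to grab the PID of a background `dd` instead:

```shell
# Start a dd in the background, capturing its stderr.
dd if=/dev/zero of=/dev/null bs=1M count=8192 2>stats.txt &
sleep 1              # give dd a moment to install its USR1 handler
kill -USR1 $!        # dd appends progress statistics to its stderr
wait
cat stats.txt
```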
Here is a fancy example that will update progress statistics every 10 seconds:
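One way to do it, assuming `pkill` from procps (the loop ends as soon as no `dd` process remains):

```shell
# Signal every running dd every 10 seconds until none remain.
while pkill -USR1 -x dd; do sleep 10; done
```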
or explicitly check the pid:
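```shell
# Look up dd's PID with pgrep, then signal it (a no-op if no dd is running).
if pid=$(pgrep -x dd); then
    kill -USR1 $pid
fi
```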
This is another example that creates a 1GB random binary file in the standard Linux RAM disk, then it copies it from the RAM disk to the current working directory. Statistics are printed and updated every second to show progress.
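A runnable sketch of that procedure, scaled down from 1 GB to 64 MB so it fits in any /dev/shm (assumes GNU dd and `pkill`):

```shell
# Random data into the kernel RAM disk at /dev/shm.
dd if=/dev/urandom of=/dev/shm/sample.bin bs=1M count=64 &
sleep 1                                    # let dd install its USR1 handler
while pkill -USR1 -x dd; do sleep 1; done  # statistics roughly every second
wait

# Copy it from the RAM disk to the current directory, again with progress.
dd if=/dev/shm/sample.bin of=./sample.bin bs=1M &
sleep 1
while pkill -USR1 -x dd; do sleep 1; done
wait
rm /dev/shm/sample.bin
```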