Ubuntu Installation :: Restore The Files That Deja Dup Created?
Dec 1, 2010
I used Deja Dup to back up my Ubuntu system. Eventually I reinstalled Ubuntu and went to restore from the files that Deja Dup created, but when it said it was done there was no change. I can go into the difftar.gz files and see my folders and files, and I even tried extracting them from there, but when I open them all the information isn't there.
I have never restored Ubuntu before, so I don't know if I'm doing it wrong or not, but it's REALLY important that I get some of these files. Am I missing a step, or what?
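Deja Dup is a front end to duplicity, so one hedged option is to bypass the GUI and let duplicity do the restore directly (the backup location below is a placeholder for wherever the difftar.gz files live):
Code:
# Untested sketch: restore a duplicity archive created by Deja Dup.
duplicity restore file:///media/backup /home/user/restored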
I've just installed the Burg loader on a PC that dual-booted Windows 7 and Ubuntu 11.04 with no problem using grub2 in the past. But after I installed Burg it boots straight into Windows, and Ubuntu is nowhere to be found. Is there a way to restore grub2, preferably without losing any files?
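Nothing on disk needs to be lost: grub2 can be reinstalled from an Ubuntu live CD. A sketch, assuming the Ubuntu root partition is /dev/sda1 (check with sudo fdisk -l):
Code:
sudo mount /dev/sda1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda
# after rebooting into Ubuntu, regenerate the menu:
sudo update-grub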
I'm trying to be a good user and back up my hard drive onto my external drive using Deja Dup, and it hits me with this error message:
Code:
OK, any ideas at all about what I'm doing wrong? I used Deja Dup to back up my last computer without any issues, but I seem to be making some kind of mistake this time around.
I installed F9 for a friend. She wasn't getting any sound. I ran aplay -l, but got no sound card listed. I ran lspci -v, but got no sound card listed either. However, the output of lspci -v said the computer had a particular motherboard. I Googled it and was told the board had an onboard sound card. That led me to the package called realtek-linux-audiopack-4.06a, which I installed. It included an installation script, which I ran.
The script failed to compile the various files it was supposed to, but it did succeed in deleting various files from my friend's system. Here are the bits of the script that removed files:
echo "Remove old sound driver" if [ -d /lib/modules/$KERNEL_VER/kernel/sound ]; then rm -rf /lib/modules/$KERNEL_VER/kernel/sound/pci > /dev/null 2>&1
[code]....
In the end, the failure of the compilation didn't matter, because lspci -v had lied: she didn't have the motherboard shown, but a different one without an onboard sound card.
As for the files deleted by the script: I was able to reinstall libasound.so.2 and libasound.so.2.0.0, but I haven't yet tried to reinstall the other ones. Now I want her to buy a sound card, but I'm afraid it won't work unless all the deleted files are reinstalled. I'm looking for guidance on how to reinstall the files deleted by the bits of the script I set out above, without completely reinstalling Fedora.
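Since the deleted paths live under directories owned by packages, one hedged approach is to ask rpm which packages own them and reinstall those (the package names are educated guesses for F9):
Code:
rpm -qf /lib/modules/$(uname -r)/kernel/sound   # usually the kernel package
# if your yum supports the reinstall command:
yum reinstall kernel alsa-lib alsa-utils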
I was working on my Ubuntu lab machine and accidentally deleted the project files I was working on. I have been working on the project for the last 10 days. Is there a way to restore the files? I do not have sudo access. I was working in my home directory, which is served by a common file system (serving all the lab machines).
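If the shared file system is a NetApp or ZFS server with snapshots (common in lab setups, but only a guess here), deleted files may still be recoverable without sudo:
Code:
# Hedged: look for snapshot directories exposed by the file server.
ls ~/.snapshot
ls ~/.zfs/snapshot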
I can't figure out where to see when files and folders were created on the system. All I can see in GNOME and the terminal is the modification time. I also want to see the last access time.
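Most Linux file systems of this era (ext3) don't record a creation time at all; only access (atime), modification (mtime), and inode-change (ctime) times exist. stat shows all three, and ls can show access time:
Code:
stat somefile      # prints the Access, Modify and Change timestamps
ls -lu somefile    # -l with -u shows access time instead of mtime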
Every time I open a drive, an icon shows up on the desktop. I hate that! I want a clean desktop free of icons so I can put pretty widgets and all that other junk there. How do I stop this from happening?
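In GNOME 2 this is a Nautilus preference stored in gconf; switching it off hides mounted-volume icons from the desktop:
Code:
gconftool-2 --type bool \
    --set /apps/nautilus/desktop/volumes_visible false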
I ran Scalpel to retrieve some files from an old HDD, what I didn't realise is how much larger the output from Scalpel is than the original disk space.
So what happened is at about 5% of files carved (pass 2/2) my disk space was full and the program aborted.
Then when I rebooted, Ubuntu wouldn't let me in except via Recovery Mode.
I think this is because there is no disk space left. The error I get is to do with Gnome Power Manager not being configured, I never had this error before.
So I went to a CLI prompt and ran 'find -maxdepth 2 | grep *.mpg', and I can see a ton of mpg directories called ./~/mpg-1-6, ./~/mpg-1-7, etc., and they all have lots of files.
Now I have no way of deleting these; I tried 'sudo rmdir */*mpg' and it cannot find the directory.
When I go to my home folder and 'sudo ls -al' these aren't listed.
The only way I can see them is using the 'find' command.
Can I chain the find command with a delete command somehow?
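Yes: find can hand its matches straight to rm via -exec. A sketch based on the output quoted above (preview with -print before deleting anything):
Code:
# Preview what matches first:
find . -maxdepth 2 -type d -name 'mpg-*' -print
# then remove the matched directories and everything inside them:
find . -maxdepth 2 -type d -name 'mpg-*' -prune -exec rm -rf {} +
# Caution: './~' above is a literal directory named '~'. Never pass an
# unquoted ~ to rm -rf, as the shell expands it to your home directory.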
I am trying to set the default files created by www-data to 774 (umask of 003).
I go to
Code: /etc/apache2/envvars
and have set these parameters. NOTE: The only thing I actually changed was adding the umask 003 at the end.
Code:
# envvars - default environment variables for apache2ctl
# Since there is no sane way to get the parsed apache2 config in scripts, some
# settings are defined via environment variables and then used in apache2ctl,
# /etc/init.d/apache2, /etc/logrotate.d/apache2, etc.
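For reference, the change amounts to one line at the end of envvars, and Apache picks it up only on a full stop/start (a plain reload keeps the old umask):
Code:
# appended at the end of /etc/apache2/envvars
umask 003
# then:
sudo /etc/init.d/apache2 stop
sudo /etc/init.d/apache2 start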
I'm a newbie despite using Ubuntu most of the time for nearly 3 years. There are some files which are created automatically on one of my NTFS partitions. The files are khq, khp, kht, an autorun.inf file, and others. They seem to have been created while I was using Ubuntu, and even though I delete them, they appear again later. I have googled and found some information saying the files are malware. I would like to know if this is a known issue with a known solution. This is the first time I'm posting a thread; I hope I have posted it in the right place.
I created a little bash script for renaming files in a folder. Every time, I have to put that bash script file (rename.sh) in the folder. Is there any way I can call rename.sh from the terminal without moving rename.sh into the folder? One more question: whenever I run any .sh file, a .sh~ file is automatically created. Is that my programming mistake, or is it normal?
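On the first question: put the script somewhere on your PATH and it can be run from any directory. On the second: files ending in ~ are backup copies made by your text editor (gedit and others do this), not a programming mistake.
Code:
mkdir -p ~/bin
mv rename.sh ~/bin/
chmod +x ~/bin/rename.sh
# Ubuntu adds ~/bin to the PATH automatically once it exists
# (log out and in again, or run: export PATH="$HOME/bin:$PATH")
rename.sh    # now runs from any folder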
Is there a simple command to copy files that have been created within the past 2 hours? I've been looking through the man pages for unison, rsync, find, and cp, and I can't find what I'm looking for. All I need is a simple command:
Code:
Copy folder a to b if created < 2 hours.
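Linux (ext3) doesn't track a true creation time, so the closest simple command filters on modification time. A GNU find sketch (-mmin counts minutes):
Code:
# copy files under a/ modified in the last 120 minutes into b/
find a/ -type f -mmin -120 -exec cp -t b/ {} +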
I currently have access to a VPS where we are running a small game server on Ubuntu. The problem is that it is a multi-user environment, so when one person restarts the server process, all files it creates are owned by that user's name and group. I have created a group called 'game' and added both users to it, but I need to know how to make all files in the game server's directory r/w/x for the group 'game'. Currently, I have a script that chowns and chmods all files recursively on startup, but I'd prefer not having to do this.
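The usual fix is the setgid bit on the directories (so new files inherit the group) plus group-write permission; a default ACL additionally makes new files group-writable regardless of each user's umask. A sketch, with /srv/game standing in for the server directory:
Code:
chgrp -R game /srv/game
chmod -R g+rwX /srv/game
find /srv/game -type d -exec chmod g+s {} +        # inherit the group
# optional, if the filesystem supports ACLs:
find /srv/game -type d -exec setfacl -d -m g:game:rwX {} +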
Using Ubuntu 10.10 (installed via Mythbuntu), I'm unable to read or see files/directories created under Ubuntu. I think it started happening after a reboot into Windows. Some of the directories created under Ubuntu have disappeared completely, and some of them produce the following error:
Code:
/media/storage/videos/Kids Videos$ ls
ls: cannot access Justin Bieber: Input/output error
ls: cannot access Octonauts: Input/output error
The rest of the directory is seen fine. Same on some files:
Code:
ls -l
ls: cannot access Dirk Gently.mp4: Input/output error
ls: cannot access Dirk Gently.nfo: Input/output error
ls: cannot access Dirk Gently.srt: Input/output error
ls: cannot access Dirk Gently.tbn: Input/output error
ls: cannot access Human Planet: Input/output error
ls: cannot access Russell Howard's Good News: Input/output error
ls: cannot access The Planets: Input/output error
ls: cannot access Lost Land of the Tiger: Input/output error
total 300160
.....
Just to make it worse, I copied more data onto the disk from Windows, so I may have lost some of it completely. Is there any way I can repair this? When I tried to check the disk under Windows, it said it couldn't. Some of the missing files can be reloaded, but others can't. I ran chkdsk /f under Windows XP; some files have reappeared, but a lot of files were unrecoverably lost. Conclusion: Ubuntu 10.10 is badly broken for writing to NTFS. As I would like to share the external disk between Windows and Ubuntu, I'm not sure what to do at this stage.
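For what it's worth, ntfs-3g ships a small repair tool that clears common inconsistencies and the dirty flag (it is not a full chkdsk, so it may not recover what Windows couldn't); /dev/sdb1 is a placeholder for the external partition:
Code:
sudo umount /media/storage
sudo ntfsfix /dev/sdb1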
I'm dipping my toes into some bash scripting and was wondering if there was a way to delete a file not based on how old it is, but rather how many other files are currently in the folder... or something to that effect....
What I'm doing is creating a script to back up a folder nightly. I'd like to keep a maximum of 3 backups. However in case the script for some reason fails to run one night (computer turned off possibly) I don't want to set the condition for deletion to be the date.
I know that if I run:
Code:
find /path/to/files* -mtime +3 -exec rm {} \;
that it will delete everything older than three days. -atime and -ctime don't seem to be what I'm looking for... is there another command I can use to achieve what I'm trying to do?
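Since the goal is a count rather than an age, one approach is to sort the backups newest-first and remove everything past the third. A sketch, assuming one dated backup file per night in /path/to/backups (and no spaces in the names):
Code:
ls -1t /path/to/backups/backup-* | tail -n +4 | xargs -r rm --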
I have hundreds of MTS and AVI files going back to 2000 and would like to rename them in the following manner based on the date created: DD-MMM-YYYY HH.MM.SS_X, where X begins at 1 and increments by 1 if there are duplicate date/time-stamped videos.
Ex: 19-Nov-2002 08.12.30.avi, 19-Nov-2002 08.13.30_1.avi and 19-Nov-2002 08.13.30_2.avi
Someone previously wrote the following script for me, and it works great for photos. It uses EXIV2 to get the image date created info. I have tried to understand the script, but am struggling. The video files I have can use the date modified since I have not modified them since I filmed them.
Code:
#!/usr/bin/env python
import os
import stat
import pyexiv2
import time

directory = '/home/david/Desktop/test'
[Code].....
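Since the videos can go by modification date, no EXIF reading is needed and a short shell script may suffice. A sketch using GNU date's -r option (which prints a file's mtime); the directory comes from the script above:
Code:
#!/bin/bash
# Rename videos to DD-MMM-YYYY HH.MM.SS based on their mtime,
# appending _1, _2, ... when two files share a timestamp.
cd /home/david/Desktop/test || exit 1
for f in *.avi *.AVI *.mts *.MTS; do
    [ -e "$f" ] || continue
    ext=${f##*.}
    stamp=$(date -r "$f" '+%d-%b-%Y %H.%M.%S')
    new="$stamp.$ext"; n=1
    while [ -e "$new" ]; do
        new="${stamp}_$n.$ext"; n=$((n+1))
    done
    mv -n -- "$f" "$new"
done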
I guess in most cases, when extracting a tar archive, we get a directory with the same name as the archive file but a different suffix. But in some unlucky cases, as I met today, after extracting a tarball I find lots of files spread across the working directory, which is a real nuisance. So what I want to learn from you is: how can I move those newly created files? I know it should be some fancy "find plus mv" approach, but I don't know exactly how.
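No find needed: tar can list its own members, and the unique top-level names can be fed to mv. A sketch (tidy/ is a placeholder destination):
Code:
mkdir tidy
tar tf archive.tar | cut -d/ -f1 | sort -u | xargs -r -I{} mv -- {} tidy/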
I created a directory somewhere with permissions rwxrwxr-x so that other users in my group can create files and directories in it.
I need to be able to delete the contents of this "public" directory, but it seems that while I am able to remove any files in it, I cannot remove any subdirectories under it.
Is there a way to remove such subdirectories owned by others under a directory owned by me?
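In general, no: removing a non-empty subdirectory means unlinking its contents, which requires write permission on that subdirectory, and its owner controls that. You can rmdir empty ones; for the rest you need root or the owner. For future subdirectories, a default ACL on the public directory sidesteps the problem (myname is a placeholder; requires ACL support on the filesystem):
Code:
setfacl -d -m u:myname:rwx /path/to/public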
With the find command it is easy to find files that have been modified or accessed within a given period. When a file is created, the access time is the same as the modify time. But as soon as it is accessed (read), the access time changes, while the modify time does not. I need to find files that have been accessed at all, i.e. files whose access time is newer than their modify time. How do I do that?
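find has no built-in test that compares a file's own atime with its own mtime, but it can run stat on each candidate. A GNU stat sketch (%X is atime, %Y is mtime, both as epoch seconds):
Code:
find . -type f -exec sh -c \
  '[ "$(stat -c %X "$1")" -gt "$(stat -c %Y "$1")" ]' sh {} \; -print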
I've created a script to FTP some files from a RHEL box to an EMC NAS, and the script works: if I run it manually it runs fine and transfers the files to the NAS. But when I schedule it in the crontab, the script runs but doesn't transfer anything. When I capture the output of the cron job I see a message about 'passive mode refused' and 'lib: not a plain file', 'sys: not a plain file', etc. I did a vi ~/.netrc, which contains the single line "machine xxx.xxx.xxx.xxx login ftpuser password xxx".
My script looks like:
Code:
/usr/bin/ftp -i xxx.xxx.xxx.xxx <<EOF
cd ftpdir
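Two things commonly differ under cron: the environment (HOME may not point at the .netrc you edited) and the transfer mode. A hedged sketch of a cron-friendlier version; -p requests passive mode on most Linux ftp clients (if yours lacks it, put a 'passive' line inside the here-document), and the paths are placeholders:
Code:
#!/bin/sh
HOME=/home/ftpuser; export HOME   # make sure ftp finds the right .netrc
/usr/bin/ftp -i -p xxx.xxx.xxx.xxx <<EOF
cd ftpdir
mput *
bye
EOF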
Kernel 2.6.21.5, GNU (Slackware 12.0). Bash 3.1.17.
I want to search an entire subtree of / for all files with the extension html on the hard disk. In addition, these have to be the last five created. I think I can split the problem into two parts: (a) forget about the last condition; then this is a job for the find command. (b) Sort the output of find using the date as the key, then use 'head' to print the desired output. But even two such simple steps are enough to justify writing a shell script. And here lies my weakness.
My script-writing knowledge is rudimentary. What's the final purpose? Well, I lately saved four or five LQ pages onto disk containing information I consider valuable, but I don't remember exactly where on the disk. So either the problem posed is really of a very simple nature or it is not, in which latter case a script is mandatory. One drawback of the algorithm described above is that find may run for a great deal of time: my machine's resources are scarce (RAM and CPU speed are low) and there are possibly a large number of HTML files on the disk.
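Actually both steps fit in one pipeline, so no real script is required. A GNU find sketch (%T@ prints the modification time as epoch seconds; ext3 keeps no creation time, so mtime is the usual stand-in):
Code:
find / -type f -name '*.html' -printf '%T@ %p\n' 2>/dev/null \
    | sort -nr | head -n 5 | cut -d' ' -f2-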
Files and directories are NOT being created with consistent ownership and permissions: when created via Samba, they are created with user/group = nobody, and when created via the OS, they are created with user/group = root. This causes problems with our automated tools that access the server (via Samba) and do a variety of file system operations (which need root privileges). How do I configure Samba so files/directories are created with user/group = root?
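Samba can force ownership per share. A hedged smb.conf sketch (share name and path are placeholders; note that forcing root is a significant security trade-off):
Code:
[tools]
   path = /srv/tools
   force user = root
   force group = root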
I have just recently set up our network with an Ubuntu server which is being accessed by one Ubuntu workstation via NFS and one XP workstation via Samba. Following instructions from the web, I set permissions like this:
Code:
sudo chmod 777 /media/Data
sudo chown -R curley:curley /media/Data
The problem is that files created via Samba are locked from NFS and vice versa. I am assuming I need to add the NFS network share to the curley group, or use something more global than curley, to make this work?
Samba is working ok from Ubuntu when browsing through Nautilus, but there is no access to shares when doing things like uploading or downloading files to the web (the "save file" window only shows local folders). So I guess I wouldn't mind just using Samba if I could map a folder in Samba to solve that problem.
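One hedged way to keep both sides happy is to make everything group-writable with the group inherited, and tell Samba to create group-writable files too:
Code:
sudo chmod -R 775 /media/Data
sudo find /media/Data -type d -exec chmod g+s {} +
# and in the [Data] section of /etc/samba/smb.conf:
#   create mask = 0664
#   directory mask = 0775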
I am migrating (many) .tex files created in WinEdt under Windows XP to Kile under openSUSE. Kile uses UTF-8 encoding by default, whereas my files are apparently ANSI. I infer this from my failed attempt to get WinEdt to encode in UTF-8 for me, as follows: I had inserted the comment
% !Mode:: "TeX:UTF-8"
into my .tex file (following Re: UTF-8). When I had WinEdt open this file it objected in no uncertain terms: ``The file is not in UTF-8 format: Loading as ANSI.''
Summarizing, WinEdt provides me files encoded as ANSI. Yet in Kile -> Settings -> Configure Kile -> Open/Save there is no option to select ANSI.
Kile will open these files in read only mode, and I can then save them with a new name, and proceed -- but it would be more satisfying to know the ``proper'' approach to this.
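"ANSI" on a Western-European Windows XP almost always means code page 1252, which iconv can convert in bulk; a sketch (adjust the source encoding if your code page differs):
Code:
for f in *.tex; do
    iconv -f WINDOWS-1252 -t UTF-8 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done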
How do I find and list files and directories present in the current directory which were created in, say, 2005, 2006, and 2009, and then move them to some other location, for example /backup? Yes, I need to list them and move them simultaneously. We can use:
Code:
find . -mtime n -exec mv {} /backup \;
but that n is troublesome for me to figure out for files/directories created in 2005, 2006, and 2009, for instance. Is there any way to match exactly by year rather than calculating n (days × 24 hours)?
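GNU find (4.3.3 and later) accepts -newermt with a plain date, which avoids the day arithmetic entirely. A sketch that prints and moves in one pass (mv -t is the GNU form that takes the destination first):
Code:
find . -maxdepth 1 -mindepth 1 \
  \( \( -newermt 2005-01-01 ! -newermt 2007-01-01 \) \
  -o \( -newermt 2009-01-01 ! -newermt 2010-01-01 \) \) \
  -print -exec mv -t /backup -- {} +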
I am facing a problem regarding permissions. How can I set 775 permissions for all newly created files and folders? When I run chmod -R 775 /data, the permission is applied to all existing files and folders, but when I create a new folder it doesn't get that permission. I want this 775 permission to be permanent for all old and newly created files.
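chmod only affects existing items; permissions on new ones come from the creating process's umask. A umask of 002 produces 775 directories (plain files come out 664, since files are never created executable), and the setgid bit keeps the group consistent:
Code:
chmod -R 775 /data
find /data -type d -exec chmod g+s {} +   # new items inherit the group
umask 002    # add this to the users' ~/.bashrc or /etc/profile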
I have a directory cookie_tmp which is owned by some:fella. Session cookies are being created under this directory. How can I set up the directory so that the files created in it are owned by some:fella?
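The group half is straightforward: the setgid bit makes new files inherit the directory's group. The owner, however, is always the user of the creating process on ordinary Linux filesystems, so files will be owned by 'some' only if whatever writes the cookies runs as 'some'. The group part:
Code:
chown some:fella cookie_tmp
chmod 2770 cookie_tmp   # setgid bit: new files get group 'fella'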
I am backing up parts of my computer with dd, and I was wondering if there is a quick way to split the files it creates into 4.4GB files that will fit onto a DVD. Does anyone have any idea how to do this?
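split can read dd's output straight from a pipe, so the image never has to exist in one piece. A sketch (device and chunk size are examples; 4400 MB stays under a 4.7 GB DVD):
Code:
dd if=/dev/sda1 bs=1M | split -b 4400m - sda1.img.
# later, to reassemble and restore:
cat sda1.img.* | dd of=/dev/sda1 bs=1M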
I am trying to create an RPM package. However, when the RPM package installs, it needs to skip files that might have been created by the user after the last installation and use of the program. Is there a way to build an RPM package that skips the user-created content in the installation dir?
For example: let's say my RPM package creates the following dir and fills it with the files required by my application, say .app files: /ppm/config/
However, the user may also create a few .xml files in the same dir. How can I package my program so that it will not delete the .xml files from the above dir and will just create the application files (.app files)?
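RPM only ever touches the files listed in a package's %files section; anything the user drops alongside them is ignored on upgrade and left behind on erase (rpm won't remove a non-empty directory it doesn't fully own). So it is enough to own the directory and the .app files and nothing else; a hedged .spec fragment:
Code:
%files
%dir /ppm/config
/ppm/config/*.app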