I have an Ubuntu Lucid Lynx server running on an old eMachines box as a fileserver. It has a big (2 TB) slow (10-20 MB/s) external drive (ntfs-3g) and a small (20 GB) fast (60 MB/s) internal drive (ext3).
All my data lives on the big drive, served via smbfs, vsftpd, and sshd, and I see dramatic speed differences between the two drives.
I'm wondering: is there a way to set things up so that the external drive uses a cache on the internal drive to speed things up?
For example ... ftpd takes data from a client, and instead of writing it slowly straight to the external drive, the data is written first (and quickly) to the internal drive and then, later, copied out to the external drive?
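One low-tech way to get that staging behaviour is a cron job that drains an upload directory on the fast internal disk out to the external disk. This is only a sketch, and /srv/incoming and /mnt/external are assumed paths, not your real ones:

Code:
# crontab entry: every 10 minutes, move settled uploads from the fast
# internal disk to the slow external disk (a real setup would also skip
# files that are still being written)
*/10 * * * * rsync -a --remove-source-files /srv/incoming/ /mnt/external/incoming/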
Let's say that software is written that treats a network drive as a swap drive.
Further, let's say that this network drive is not a hard drive on the server but a chunk of memory treated as a filesystem; in other words, a ramdisk on the server.
Given the bottleneck of gigabit ethernet that is used for the link, can anyone predict the likely practical bandwidth of this swap drive in MBytes/s, and crucially the latency in milliseconds?
The reason for this imaginary setup is outside the scope of the LinuxQuestions forums; please comment on the likely performance only.
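For what it's worth, a back-of-envelope estimate (assuming an NBD- or iSCSI-style transport over an otherwise idle link) runs like this:

Code:
# Raw gigabit:  1000 Mbit/s / 8               = 125 MB/s
# Minus TCP/IP and protocol overhead (~10%):  roughly 105-115 MB/s practical
# Latency: LAN round trip ~0.1-0.3 ms, plus server-side handling and the
# 4 KB page itself (~0.03 ms on the wire), so something around 0.2-0.5 ms
# per page fault seems plausible: far better than spinning disk, far worse
# than local RAM.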
I'm trying to set up an SSD as a cache for my external HDD (which is where my Debian testing/stretch installation lives). The installation uses LVM 2. I'm trying to have the SSD cache the entire external HDD, not just one of the partitions (such as the root or home partition).
Here are the relevant outputs.
uname -a: (Yes, I'm using the Debian stable kernel with Debian testing.)
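For reference, the usual lvmcache recipe looks roughly like the following. This is only a sketch: the VG name vg0, origin LV lv_root, and SSD device /dev/sdb are placeholders, not taken from the actual setup.

Code:
# add the SSD to the existing volume group (names are assumptions)
pvcreate /dev/sdb
vgextend vg0 /dev/sdb
# carve a cache pool out of the SSD and attach it to the origin LV
lvcreate --type cache-pool -l 100%FREE -n cpool vg0 /dev/sdb
lvconvert --type cache --cachepool vg0/cpool vg0/lv_root

Note that lvmcache attaches to one logical volume at a time, so caching "the entire HDD" in practice means either one big origin LV or one cache per LV.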
I had the idea of caching writes to my NFS filesystem on my local hard drive.
It seems CacheFS exists to cache reads from NFS, but not writes.
I would imagine that if I could cache writes to NFS on a local drive, I could have a fast system while keeping all my files where I want them on my network.
I'm thinking about this because I'm planning to buy an SSD, and I imagine that set up this way the system could be lightning fast while keeping everything on the network. Currently, copying a large file (hundreds of MB) is quite slow; with an SSD and caching, I imagine the copy could be very fast.
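As a point of comparison, the read-side caching that does exist (FS-Cache, via cachefilesd) is enabled roughly like this on Debian/Ubuntu; a sketch, with the export and cache directory as assumptions:

Code:
# install the cache daemon and point it at a directory on the fast disk
apt-get install cachefilesd
# set RUN=yes in /etc/default/cachefilesd and "dir /ssd/fscache" in
# /etc/cachefilesd.conf, then mount NFS with the fsc option:
mount -t nfs -o fsc server:/export /mnt/nfs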
I'm testing openSUSE 11.3 on a server and I'd like to disable the write cache on all of my drives. In Ubuntu Server I was able to accomplish this with hdparm by adding the appropriate settings to /etc/hdparm.conf.
As far as I can find, the only thing openSUSE offers is /etc/sysconfig/ide, which lets you force particular DMA modes. I could just put the hdparm commands in /etc/init.d/boot.local, but I'd prefer to do it the right way, if there is a right way to do this in openSUSE.
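For reference, the boot.local fallback mentioned above would look something like this (the drive names are assumptions):

Code:
# /etc/init.d/boot.local: disable the on-drive write caches at boot
hdparm -W0 /dev/sda
hdparm -W0 /dev/sdb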
I don't understand this error, nor do I know how to solve the issue that is causing it. Anyone care to comment?
Quote:
Error: Caching enabled but no local cache of //var/cache/yum/updates-newkey/filelists.sqlite.bz2 from updates-newkey
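A commonly suggested first step for stale-metadata errors like this (no guarantee it applies here) is to flush yum's cache and retry:

Code:
yum clean metadata
# or, more drastically:
yum clean all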
I know, JohnVV: "Install a supported version of Fedora, like Fedora 11." This is on a box that has all 11 releases of Fedora installed. It's a toy and I like to play around with it.
I was laughing about klackenfus's post with the ancient RH install, and then work had me dig up an old server that has been out of use for some time. It has some proprietary binaries installed that intentionally try to hide files to prevent copying (and we are no longer paying for support, nor do we have the install binaries), so a clean install is not preferable.
Basically it has been out of commission for so long that the apt-get upgrade download is larger than the /var partition (apt caches to /var/cache/apt/archives).
I can upgrade the bigger packages manually until I get under the threshold, but then I learn nothing new. So I'm curious: can I redirect apt's cache to a specified folder, either on the command line or via a config setting?
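For what it's worth, apt does expose the archive directory as a configuration option; a sketch, with /mnt/bigdisk/apt as an assumed location (apt expects a partial/ subdirectory inside it):

Code:
mkdir -p /mnt/bigdisk/apt/partial
# one-off, on the command line:
apt-get -o Dir::Cache::archives=/mnt/bigdisk/apt upgrade
# or persistently, in e.g. /etc/apt/apt.conf.d/99cache:
Dir::Cache::archives "/mnt/bigdisk/apt/";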
I installed Squid cache on my Ubuntu Server 10.10 and it works fine, but I want to know how to make it cache all file types, such as .exe, .mp3, .avi, etc. The other thing I want to know is how to let my clients fetch files from the cache at full speed, since I'm using a MikroTik system to provide PPPoE for clients and have paired it with my Ubuntu Squid box.
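For the first part, the usual knobs are maximum_object_size plus refresh_pattern rules in squid.conf; only a sketch, and the sizes and retention times here are assumptions to tune:

Code:
# squid.conf: allow big objects and keep them around longer
maximum_object_size 200 MB
cache_dir ufs /var/spool/squid 20000 16 256
# cache common download types aggressively (min/percent/max, minutes)
refresh_pattern -i \.(exe|zip|mp3|avi|iso)$ 10080 90% 43200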
My wireless seems to be fast for a good 30 seconds, then bang: it takes a good while to load the next page, almost as if it's disconnecting and then rescanning/reconnecting. Why can't it stay connected? I have WPA-PSK security. Here are my network settings; please let me know if I should change any of them. (Side note: is there a way to stop this problem occurring so frequently? The wiki says it should only occur once in a while: https:[url].....)
I was looking for a way to stop my menus from taking a few seconds to load their icons when I first open them, and found a few guides suggesting the gtk-update-icon-cache command. With the Any Colour You Like (ACYL) icon theme I'm using (stored in my home folder's .icons directory), however, I kept getting a "gtk-update-icon-cache: The generated cache was invalid." error. I used the built-in facility in the ACYL script to copy the icons to /usr/share/icons and tried the command again, this time as sudo gtk-update-icon-cache --force --ignore-theme-index /usr/share/icons/ACYL_Icon_Theme_0.8.1/, but I still get the same error. I tried with several of the custom icon themes I've installed, and only one of the first 7 or 8 successfully created the cache.
I have a computer on my LAN that I'm using as a file server for my photography work. What I want is to allow my business partner to access the file server from his home over the internet. I'd also like to create a shared folder on each of our computers that we can each access and modify, so we can sync our work easily without being in the same office. What would be the easiest way to do this, and how exactly do you access another person's computer over the internet?
I am taking another stab at this. The last time I attempted it, it seemed like everyone had a different way to do it, but nobody could give me an answer on how to do it...
I currently have a domain controller running SME Server, using LDAP as a backend. I have two file servers running Ubuntu 10.04. My overall goal is that when I create a username on the domain controller, it is automatically propagated to the fileservers. That way everyone will have their own username and password to access the fileservers, and I'll be able to track what people do on them.
The next requirement is to be able to apply permissions to folders on the fileservers based on the users created on the domain controller.
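One common way to get that effect is to point each fileserver's Samba at the DC's LDAP rather than copying accounts around; a sketch of the smb.conf side, where the domain name, hostname, and suffix are all placeholders:

Code:
# smb.conf on each Ubuntu fileserver (names are assumptions)
[global]
   workgroup = MYDOMAIN
   security = domain
   passdb backend = ldapsam:ldap://dc.example.lan
   ldap suffix = dc=example,dc=lan
   ldap admin dn = cn=admin,dc=example,dc=lan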
I'm looking for a solution for a fileserver that could be accessed via VPN from Windows. Is it possible to configure something like this? If yes, what kind of software should I use? vsftpd? Samba?
Installing 11.2 from the KDE LiveCD on an IBM ThinkCentre with a 3.2 GHz CPU and 1 GB RAM. Ubuntu 9.04 is on the first two partitions. I go through the configuration and click 'install': the install progress bars remain blank. After 2-3 minutes, a black screen with a scroll of attempted installation pieces and the error message: "Respawning too fast. Disabled for 5 min." Freeze.
Other posts mention a problem with init, but this is happening during the install, so I'm not able to address that. There's no apparent md5sum for the LiveCDs, and no mention of this problem in the installation help guide. Does anyone know how to deal with this? If you need more info, I will provide it. Though it seems this is not an unusual problem when booting an installed system, there's no mention of it happening during installation.
I am very new to Ubuntu (and Linux in general), and it has been a long, long time since I have dealt with setting up servers. I have done a lot of searching but haven't found exactly what I THINK I am looking for. I want to create a file server (I have created my Ubuntu Server CD) and add it to my home network (all Windows PCs). I need to be able to access it when away from home (I work away from home mostly). I will be accessing it with a Windows 7 laptop.
What do I need installed on the server? Samba for the file server part, but what else for the remote access? I would rather not access the data via FTP; I would like it to come up as a drive in Windows Explorer. If not, I remember back in college (20 years ago) being able to open a little window (X Window, maybe) on the other server.
An issue I see that might not be an issue: I have a static IP from my ISP. It comes into my home via their modem, and I attach to the modem with a router. All my laptops connect to it wirelessly, and this server will be wired. How do I reach the server, and not one of the laptops, with only the one IP address? Each of these, plus my external hard drive and printer, has its own internal IP address that I have assigned.
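The usual answer here is port forwarding on the router: the one public IP receives the connection, and the router hands it on to the server's fixed LAN address. A sketch, where 192.168.1.10 and the port are assumptions:

Code:
# router port-forward rule (set in the router's admin UI):
#   public TCP 22 -> 192.168.1.10:22   (SSH/SFTP)
# from the Windows 7 laptop you then connect to the static IP itself:
ssh user@your.static.ip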
I recently upgraded my Ubuntu Samba fileserver to 10.04, along with increasing the size of my RAID 1 /home directory. I am using the same smb.conf setup I used on Intrepid Ibex, and on Hardy Heron before that. On the new setup I can see the Ubuntu server from my Windows 7 machines, but I can't see the shares and can't access them. Checking the logs (/var/log/samba), one log continually looks for a printer share, from one Windows machine, that I have not set up on Samba yet.
I have found a few people reporting similar problems online, even a few who filed bugs, but then they say "my computer started working suddenly, I don't know what happened" and close the bug, or "my computer started working after I rebooted my machine." I have rebooted all machines on the network; that doesn't fix it.
I've set up a Samba fileserver at work without too many problems, and I've prepared a batch file to mount it as the Z: drive letter on the Windows machines at startup. As a sad result, the share gets filled with viruses and has become a vehicle of infection.
folder1 ----> folder2 and many other files and folders
folder1 has shared access, read and write for everyone, so I have no password problems for anyone who needs it, but on Windows I would use NTFS security to make folder1 itself read-only (viruses act as if a pendrive were connected and mainly drop infected files in the "root" of the share, in my case folder1) and then give everyone full control of folder2. I've been trying to understand how to do this, but I'm quite new to Linux and smb.conf really scared me. I've tried the Samba graphical tool, which was a lot easier, but I'm not able to achieve this result: no user password needed to mount the share, no write access in folder1, and full control in folder2.
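One way to reproduce the NTFS trick is to let Samba permit writes but have filesystem permissions lock the top level; a sketch, where the share name, paths, and the smbguest account are assumptions (you may also need "map to guest = Bad User" in [global] for password-free access):

Code:
# smb.conf: passwordless share, writes allowed at the Samba level
[shared]
   path = /srv/folder1
   guest ok = yes
   read only = no
   force user = smbguest

Code:
# the filesystem does the real enforcement: folder1 read-only, folder2 writable
chown root:root /srv/folder1 && chmod 755 /srv/folder1
chown smbguest /srv/folder1/folder2 && chmod 775 /srv/folder1/folder2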
I am somewhat of a newbie at *nix. I've asked some questions about Debian in the past, but I decided to just go ahead and start simple with Ubuntu, then move up once I've got the hang of things. So, I want to use Ubuntu 9.10 as a fileserver for my network, which consists of 3 Windows PCs and 1 Mac. I have a few ideas on where to go, Samba being my first package install, but I'm not too sure where to go from there. Could someone help me out? I love to RTFM, so if you'd kindly point me in the right direction, I'll be glad to jump right on that too.
Okay, I've been using openSUSE on and off since version 10.0. I have enjoyed it quite a bit. It still hasn't taken over as my main OS, but I have installed 11.3 on my netbook and it is the primary there.
Here's the issue though. I have recently acquired an older HP fileserver. It has a 1 GHz PIII, 756 MB of RAM, three SCSI hard drives, and a tape backup drive, all in working condition. The motherboard is even a dual-socket (multi-processor) board, and I can get a matching processor for around $20-$30.
I would like to turn this into a server of some sort that I can use at the house to share photos/music/documents/movies etc. Now, I am assuming that if I were using Linux on all the computers in the house it would be easier to network all this. However, this is not the reality. My wife's computer runs XP Home and that is what she wants. My desktop dual-boots Win 7 Ultimate and openSUSE 11.3, and the netbook dual-boots XP Home and openSUSE 11.3.
How would you go about setting up the HP as a server? I plan on adding a couple of 500 GB hard drives for storage and possibly purchasing the second processor. I have a Linksys router, a ZyXEL wireless router (set up as a wireless switch), and a 5-port workgroup switch, so attaching everything should be fairly simple.
I would like to set up a small NFS server for a small LAN. Normally I would build a dedicated, cheap-and-cheerful Linux box to do this. However, I was wondering if it could all be done more easily with a commodity standalone device such as the NetGear ReadyNAS Duo NAS. I presume devices like this run their own proprietary OSes, and I would prefer a device based on an open-source OS instead. I do like the look of these devices, as they seem simple and small.
So my real question: what would Linux users advise, given that I want a minimalistic NFS fileserver? I can build my own dedicated Linux machine, but is a standalone device like the above, running something like FreeNAS, also an option?
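Either way, the NFS side itself is tiny; on a Linux box the whole server configuration can be a couple of lines (the export path and subnet below are assumptions):

Code:
# /etc/exports: share /srv/share read-write with the local LAN
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)

Code:
# reload the export table after editing
exportfs -ra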
I'm new here and I haven't really used Linux before. My problem: there was an admin at the company; I renamed his username and set a new password for it. The Linux fileserver uses Samba. I have an XP PC, which was the admin's mentioned above, and it can't access the fileserver since I renamed the admin's username and changed the password. Could the problem be that I did it in graphical mode, not in the console? I tried it with smbpasswd too, but it isn't working. Do I need to write something in the smb.conf file, or something else? If I've missed something important, please ask, but don't swear at me.
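For reference, renaming the Unix account does not rename it in Samba's own password database, so one hedged thing to check is whether the new name exists there at all (newadmin below is a placeholder):

Code:
# add the renamed account to Samba's password database and enable it
smbpasswd -a newadmin
smbpasswd -e newadmin
# list the accounts Samba actually knows about
pdbedit -L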
After a near miss with my 1.5 TB RAID 5 file server, I have decided that I need to back up my data to an external hard drive periodically. I have been looking at rsync, but my question is: do I format the external hard drive as ext3 (the same as my fileserver) or NTFS? All my main machines are Windows, but the file server is Ubuntu with a Samba share. If my server ever went belly up, I would like to be able to access my data from the external hard drive. I guess if it's ext3 then Windows would be clueless; I would either need to fix the server pronto or access it with a live CD or something. What would I lose if I used NTFS instead of ext3? I think I would lose permissions and possibly ownership information; are there any other issues?
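Whichever filesystem wins, the rsync side looks the same; a sketch with assumed paths (note that on an NTFS target, the ownership and permission bits that -a tries to copy simply won't stick):

Code:
# mirror the share to the external drive, deleting files removed at the source
rsync -av --delete /srv/share/ /mnt/backup/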
I'm hoping, now that I've recovered my partition tables, to rebuild my LVM volume group. The trouble is that one of the drives lost its partition table, and after rebuilding the table, LVM can no longer identify the drive. I'm trying to rebuild the 'fileserver' volume group.
pvscan produces the following:

Code:
Couldn't find device with uuid 'jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl'.
PV /dev/sdc5        VG dev          lvm2 [232.64 GiB / 0 free]
PV unknown device   VG fileserver   lvm2 [1.36 TiB / 556.00 MiB free]
PV /dev/sda1        VG fileserver   lvm2 [1.36 TiB / 556.00 MiB free]
Total: 3 [2.96 TiB] / in use: 3 [2.96 TiB] / in no VG: 0 [0 ]
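Since pvscan prints the missing PV's UUID, the standard recovery path is to recreate the PV with that exact UUID from LVM's metadata backup and then restore the VG configuration; a sketch, with /dev/sdb1 standing in for whatever the rebuilt partition really is:

Code:
# recreate the lost physical volume with its original UUID
pvcreate --uuid jsZAMq-LSa1-87Zb-WoGs-oi6v-u1As-h7YZMl \
         --restorefile /etc/lvm/backup/fileserver /dev/sdb1
# restore the volume group metadata and reactivate it
vgcfgrestore fileserver
vgchange -ay fileserver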
I am currently in the process of remodeling my home file server and would like some advice. The server has two internal hard drives that are rather small (10-15 GB), and I've now ordered a larger 2 TB drive, which for the time being will have to run as an external drive over USB 1 (going to be rather slow).
I'm probably going to put the OS and swap on one of the internal drives, but I was wondering if there is a good option for increasing the system's performance by making the second internal drive act as a kind of buffer for the 2 TB USB drive. I'd like both the large size of the USB drive and the fast read/write speed of the internal drive. Would it be possible to arrange things so that all reads and writes to the fileserver first go to the fast internal drive (or, even better, to the internal drive and USB drive at once, although I suppose RAID is not an option with USB-attached storage) and are then moved to the large drive? It would also be nice if the most-used files from the large USB drive were cached on the internal drive for fast reads. I understand ZFS would help me accomplish something like this, but as I understand it, it's not that easily available on Linux.
The current plan is to make the large drive a simple XFS volume and write a small daemon on the server that would simply move all new files from the internal drive to the USB one once they are no longer in use, but it would be nice to have a more low-level solution.
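As a starting point, that mover daemon can be little more than a find/rsync loop; a sketch only, with the staging and destination paths as assumptions:

Code:
#!/bin/sh
# move files untouched for 10+ minutes from the fast staging disk to the
# big USB disk, preserving the directory layout
STAGE=/srv/stage
BIG=/mnt/usb2tb
cd "$STAGE" || exit 1
find . -type f -mmin +10 -print0 |
    rsync -a --remove-source-files --from0 --files-from=- ./ "$BIG/"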
New fun from M$: we have started to test Win 7 on a few machines, and while logging on to the (AD) share worked flawlessly from XP, Vista, and the Win 7 beta, it doesn't work from the Win 7 RC.
All my production PCs run under an AD DC on Windows Server 2008. Recently I implemented a file server on CentOS 5. Now I want to integrate Samba (file sharing) with Active Directory so that all access permissions on the file server come from AD.
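The usual shape of that integration is Samba plus Winbind joined to the domain in ADS mode; a sketch, with EXAMPLE.COM standing in for the real realm:

Code:
# smb.conf on the CentOS box (realm/workgroup are placeholders)
[global]
   security = ads
   realm = EXAMPLE.COM
   workgroup = EXAMPLE
   winbind use default domain = yes

Code:
# join the domain and start winbind
net ads join -U Administrator
service winbind start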
I'd like to set up a fileserver for myself and a few trusted individuals. I'm computer savvy and I use various Linux servers frequently for work, but this is my first time trying to set up my own. Is it possible to have a Samba server set up so that it is both secure and facing the Internet? Two questions:
Will opening Samba ports make my default Ubuntu server particularly vulnerable to penetration, more so than having an SSH server running? And can Samba be configured to encrypt traffic, or is it sent in the clear? If it can, do Windows and Mac support this secure communication?
If not, what would you suggest? I'd like to achieve something like a network drive, at a difficulty level my parents could manage if they really wanted to. I will be storing things like financial information and tax returns, but no weapons-grade secrets.
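On the encryption question: sufficiently recent Samba can require SMB3 transport encryption, which Windows 8+ and current macOS can speak; a sketch of the relevant smb.conf lines (whether your clients and Samba version are new enough is the assumption):

Code:
# smb.conf [global]: refuse old dialects and force encrypted transport
server min protocol = SMB3
smb encrypt = required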