Bill Cunningham | 17 Aug 21:52 2014
Picon

extended filesystems

    I would like to start experiemnting with the ext filesystems. I might
like one day to develop something. :) What files contain the ext4
filesystem. That's what I'm running right now. I like ext 2/3/4 all of them.

    My fedora partition is only 20 GB in size. I don't need huge filesystem
support which is a feature of ext4 I believe. Which feature can I remove to
remove this feature? I know it would be done with tune2fs -O ^ and then the
feature name.

    Why would I want to do this. to learn and I don't think I need it. Too
much overhead.

Bill
Roland Olbricht | 17 Aug 19:28 2014
Picon
Picon

What uses these 50 GB?

Hello everybody,

first of all thank you the development of Ext2/3/4. It works like a 
charm and makes it possible to base applications on it.

However, now I have the first time where I need more information to 
understand the behaviour of a ext4 installation on a 480 GB harddisk.
It holds a database with a size of 355 GB, as said by

"du -m":

...
355263  /opt/ssd

However, "df" says:

Filesystem      1K-blocks       Used Available Use% Mounted on
...
/dev/sdc        468346644  409888536  35015532  93% /opt/ssd

I do understand why there is a gap between "Used" plus "Available" and 
"1K-blocks", but I don't understand why "Used" is so much bigger (54 GB 
difference) than what "du -m" indicates.

I can rule out any issues with inodes; "df -i" indicates that less than 
one percent is used.

I tried to understand more details by using "debugfs". I thought I get a 
full list of used blocks with:

(Continue reading)

Ivan Baldo | 7 Jun 01:57 2014
Picon
Picon

Recommended minimal amount of free space to keep?

     Hello.
     So, LVM is cool, having different partitions for different stuff is 
cool, and of course Ext4 is cool and *reliable*.
     So, we create some logical partitions and put ext4 on them, 
reserving LVM space for growing those partitions or even making new ones 
later.
     The thing is, I would like to keep every filesystem as small as it 
can be, but without degrading the performance too much.
     I guess that having a filesystem 99% full will create too much 
fragmentation and many other issues, but having them only 30% full seems 
like a waste.
     Currently I try to keep them at 70% full utilization but I have not 
based that on anything just guess.
     So, what % hysteresis do you recommend? For example, when they get 
70% full then grow them so that they get 50% full? Other values?
     Thanks for the hints!
     Good day everyone.

--

-- 
Ivan Baldo - ibaldo <at> adinet.com.uy - http://ibaldo.codigolibre.net/
 From Montevideo, Uruguay, at the south of South America.
Freelance programmer and GNU/Linux system administrator, hire me!
Alternatives: ibaldo <at> codigolibre.net - http://go.to/ibaldo
Keith Keller | 31 May 20:56 2014
Picon
Picon

[long] major problems on fs; e2fsck running out of memory

Hello ext3 list,

I am having an odd issue with one of my filesystems, and I am hoping
someone here can help out.  Yes, I do have backups.  :)  But as is often
the case, it's nice to avoid restoring from backup if possible.  If
there is a more appropriate place for this question please let me know.

After quite a while between reboots, I saw a report on the console that
the filesystem was inconsistent and could not be automatically repaired.
After some aborted tests (which I did not log, unfortunately, I was able
to get this far:

# head fsck.out
fsck from util-linux-ng 2.17.2
/dev/mapper/vg1--sdb-lv_vz contains a file system with errors, check forced.

# time passes, progress bar gets to 51.8% with no problems, then

Pass 1: Checking inodes, blocks, and sizes
Inode 266338321 has imagic flag set.  Clear? yes

Inode 266338321 has a extra size (34120) which is invalid
Fix? yes

Inode 266338321 has compression flag set on filesystem without
compression support.  Clear? yes

# some 150k messages later

Inode 266349409, i_blocks is 94855766560840, should be 0.  Fix? yes
(Continue reading)

Maurice Volaski | 29 May 00:39 2006

[DRBD-user] [Q] What would cause fsck running on a drbd device to just stop?

drbd-0.7.19 under kernel 2.6.17-rc4 is running on a primary node 
standalone. There are 8 resources in the same group. fsck.ext3 -fv is 
being run simultaneously on all of them. Each of the drbd devices are 
running on an lv, which all belong to a single pv. The actual "disk" 
is a hardware RAID connected via SCSI (i.e., the mpt driver).

Five of the fsck finished their tasks successfully and reported no 
problems. The remaining three got "stuck". There was no activity 
either on the physical RAID itself or listed in top. They were just 
listed as "D," uninterruptible sleep. Two of the fscks were at the 
end, giving the final summary information--no problems---for their 
respective filesystems and were stuck at that point. The last one was 
stuck in the first pass. Attempts to kill them failed; even kill -9 
and attempting to shutdown were ignored. I rebooted manually and ran 
the three stuck ones again without a hitch.
--

-- 

Maurice Volaski, mvolaski <at> aecom.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University
_______________________________________________
drbd-user mailing list
drbd-user <at> lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Maurice Volaski | 29 May 00:39 2006

[DRBD-user] [Q] What would cause fsck running on a drbd device to just stop?

drbd-0.7.19 under kernel 2.6.17-rc4 is running on a primary node 
standalone. There are 8 resources in the same group. fsck.ext3 -fv is 
being run simultaneously on all of them. Each of the drbd devices are 
running on an lv, which all belong to a single pv. The actual "disk" 
is a hardware RAID connected via SCSI (i.e., the mpt driver).

Five of the fsck finished their tasks successfully and reported no 
problems. The remaining three got "stuck". There was no activity 
either on the physical RAID itself or listed in top. They were just 
listed as "D," uninterruptible sleep. Two of the fscks were at the 
end, giving the final summary information--no problems---for their 
respective filesystems and were stuck at that point. The last one was 
stuck in the first pass. Attempts to kill them failed; even kill -9 
and attempting to shutdown were ignored. I rebooted manually and ran 
the three stuck ones again without a hitch.
--

-- 

Maurice Volaski, mvolaski <at> aecom.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University
_______________________________________________
drbd-user mailing list
drbd-user <at> lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
Benno Schulenberg | 26 May 21:20 2014
Picon

more PO files available at the TP


Hi,

At the translationproject there are four more languages available
than are included in the 1.42.10 tarball: Danish, Esperanto, Malay,
and Ukrainian.  Please include these in your next release.

Attached patch adds the missing language codes to the po/LINGUAS
file.  The easiest way to fetch the missing files (and the latest
updates) is to run:

  rsync -Lrtvz  translationproject.org::tp/latest/e2fsprogs/  po

Regards,

Benno

--

-- 
http://www.fastmail.fm - Or how I learned to stop worrying and
                          love email again

_______________________________________________
Ext3-users mailing list
Ext3-users <at> redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users
Martin T | 10 May 20:42 2014
Picon

location of file-system information on ext4

Hi,

I zero-filled first 10MiB of my SSD(dd if=/dev/zero of=/dev/sda bs=10M
count=1). As expected, this wiped my primary GPD header and first
partition. Before the wipe, GPT was following:

Disk /dev/sda: 250069680 sectors, 119.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 2EFD285D-F8E6-4262-B380-232E866AF15C
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 250069646
Partitions will be aligned on 1-sector boundaries
Total free space is 16 sectors (8.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34        58593784   27.9 GiB    EF00  root part
   2        58593785       234375035   83.8 GiB    0700  home part
   3       234375036       250069630   7.5 GiB     8200  swap part

However, to my surprise, the file(1) and dumpe2fs(8) utilities were
still able to show information regarding file system:

root <at> T60:~# file -s /dev/sda1
/dev/sda1: sticky Linux rev 1.0 ext4 filesystem data,
UUID=88e70e1b-4c45-4e7e-931f-9e97d7257ee8 (needs journal recovery)
(extents) (large files) (huge files)
root <at> T60:~# dumpe2fs /dev/sda1
dumpe2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:   <none>
Last mounted on:          /
(Continue reading)

Patrik Horník | 18 Apr 18:56 2014
Picon

Many orphaned inodes after resize2fs

Hello,

yesterday I experienced following problem with my ext3 filesystem:

- I had ext3 filesystem of the size of a few TB with journal. I correctly unmounted it and it was marked clean.

- I then ran fsck.etx3 -f on it and it did not find any problem.

- After increasing size of its LVM volume by 1.5 TB I resized the filesystem by resize2fs lvm_volume and it finished without problem.

- But fsck.ext3 -f immediately after that showed "Inodes that were part of a corrupted orphan linked list found." and many thousands of "Inode XXX was part of the orphaned inode list." I did not accepted fix. According to debugfs all the inodes I check from these reported orphaned inodes (I checked only some from beginning of list of errors) have size 0.

- When I mount the fs read only the data I was able to check seem OK. (But I am unable to check everything.)

- I created LVM snapshot and repaired the fs on it with fsck.ext3. After that there we no files in lost+found. Does it mean that all that orphaned inodes have size 0? Or when the fsck does not create files in lost+found?

- I am checking the data against various backups but I will not be able to check everything and some less important data dont have backup. So I would like to know in what state the fs is and what are best next steps.

- Right now I am planning to use current LVM snapshot as test run and discard it after data check. Original fs is in the state just after resize2fs, fsck was run on it after that but I did not accepted any fix and cancelled the check. I then plan to create backup snapshot, fsck original fs / LVM volume, check once again against backups and go with it. But this will not tell me status of all my data and the fs and if it is secure to use it. Another problem is all operations take long hours.

- I have also some technical specific questions. Orphan inode is valid inode not found in any directory, right? What exactly is CORRUPTED orphan linked list? What can cause such problem? Is it known problem? How can orphaned inodes and corrupted orphan linked list can be created by resize2fs or why was it not detected by fsck.ext3 before that? Can it be serious and can it be symptom of some data loss? Can fixing it by fsck.ext3 corrupt other data which are OK now, when I mount the fs read-only?

- The platform used was latest stable Debian with kernel linux-image-3.2.0-4-amd64 version 3.2.46-1+deb7u1 and e2fsprogs 1.42.5-1.1. After the incident I started using linux-image-3.13-1-amd64 version 3.13.7-1 (from the point of snapshot's creation and running fsck for real on snapshot) and thinking about going to e2fsprogs 1.42.9 from sources.

Thank you very much.

Patrik
_______________________________________________
Ext3-users mailing list
Ext3-users <at> redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users
Martin T | 6 Mar 21:46 2014
Picon

questions regarding file-system optimization for sortware-RAID array

Hi,

I created a RAID1 array of two physical HDD's with chunk size of 64KiB under Debian "wheezy" using mdadm. As a next step, I would like to create an ext3(or ext4) file-system to this RAID1 array using mke2fs utility. According to RAID-related tutorials, I should create the file-system like this:

# mkfs.ext3 -v -L myarray -m 0.5 -b 4096 -E stride=16,stripe-width=32 /dev/md0


Questions:

1) According to manual of mke2fs, value of the "stride" has to be the RAID chunk size in clusters. As I use chunk size of 64KiB, then I have to use "stride" value of 16(16*4096=65536). Why is it important for file-system to know the size of chunk used in RAID array? I know it improves the I/O performance, but why is this so?

2) If the "stride" size in my case is 16, then the "stripe_width=" is 32 because there are two drives in the array which contain the actual data. Manual page of the mke2fs explain this option as "This allows the block allocator to prevent read-modify-write of the parity in a RAID stripe if possible when the data is written.". How to understand this? What is this "read-modify-write" behavior? Could somebody explain this with an example?


regards,
Martin
_______________________________________________
Ext3-users mailing list
Ext3-users <at> redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users
VYSHAKH KRISHNAN CH | 29 Jan 05:41 2014
Picon

Invitation to connect on LinkedIn

 
 
 
 
 
From VYSHAKH KRISHNAN CH
 
Software Engineer at Ericsson
Bengaluru Area, India
 
 
 
 
 
 
 

I'd like to add you to my professional network on LinkedIn.

- VYSHAKH

 
 
 
 
 
 
 
You are receiving Invitation to Connect emails. Unsubscribe
© 2014, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, USA
 
_______________________________________________
Ext3-users mailing list
Ext3-users <at> redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users

Gmane