Rocksolid Light

rocksolid / rocksolid.nodes / Re: i2pn2 spool size

Subject                     Author
* Re: i2pn2 spool size      Retro Guy
+- Re: i2pn2 spool size     anon
`* Re: i2pn2 spool size     Neodome Admin
 `* Re: i2pn2 spool size    Retro Guy
  `* Re: i2pn2 spool size   Neodome Admin
   `* Re: i2pn2 spool size  Retro Guy
    `- Re: i2pn2 spool size Neodome Admin

Subject: Re: i2pn2 spool size
From: retro.guy@retrobbs.rocksolidbbs.com.remove-lpj-this (Retro Guy)
Newsgroups: rocksolid.nodes
Organization: RetroBBS
Date: Mon, 25 Nov 2019 09:37 UTC
On Sun, 24 Nov 2019 13:02:39 +0000
"anon" <anon@anon.com> wrote:


this change is something you have to do when creating the partition,
right? So you need to create a new one and then move the spool to it?

Exactly.
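Purely as an illustration of that create-and-move step (the device name, mount point, and spool path below are assumptions, and the commands are guarded so they do nothing unless you point SPOOL_DEV at a real scratch partition):

```shell
# Sketch of the create-and-move step. SPOOL_DEV, /mnt/newspool and the
# spool path are assumptions; the guard keeps this a no-op by default.
if [ -n "${SPOOL_DEV:-}" ]; then
    mke2fs -T news "$SPOOL_DEV"                 # dense inode ratio for news
    mkdir -p /mnt/newspool
    mount "$SPOOL_DEV" /mnt/newspool
    rsync -a /var/spool/news/ /mnt/newspool/    # copy the old spool over
    # then stop innd, swap the mounts, fix ownership, and restart
else
    echo "set SPOOL_DEV to a scratch partition to actually run this"
fi
```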

can you list what you have been doing, exactly?

i2pn2 has been running out of inodes while plenty of bytes (many GB)
are still available. This is due to the small size of news articles:
each one uses up an inode while taking very little space.
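The symptom is easy to confirm with df, which can report both views of the same filesystem (the spool path below is an assumption; substitute your own mount point):

```shell
# Compare byte usage with inode usage on the filesystem holding the
# spool. /var/spool/news is an assumed location; df reports on whatever
# filesystem contains the given path.
SPOOL="${SPOOL:-/}"    # set SPOOL=/var/spool/news on a real news host
df -h "$SPOOL"         # bytes: can look fine, with many GB free
df -i "$SPOOL"         # inodes: IUse% is what a tradspool spool exhausts
```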

I created a new partition not much bigger than the original, but with
far more inodes. The new partition is 20% larger than the old, but has
500% more inodes available.
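The arithmetic behind that trade-off is just the bytes-per-inode ratio mke2fs applies at format time; the partition sizes below are made-up round numbers, not the real i2pn2 figures:

```python
# Illustrative only: how the mke2fs bytes-per-inode ratio sets the inode
# budget. Partition sizes here are invented, not the actual i2pn2 ones.
def inode_count(partition_bytes: int, bytes_per_inode: int) -> int:
    """mke2fs allocates roughly one inode per bytes_per_inode of space."""
    return partition_bytes // bytes_per_inode

GIB = 1024 ** 3
old = inode_count(10 * GIB, 16384)  # default ratio: one inode per 16 KiB
new = inode_count(12 * GIB, 4096)   # 'news' usage type: one per 4 KiB

# a partition only 20% larger ends up with ~4.8x the inodes
print(old, new, round(new / old, 1))
```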

If you format the partition with 'mke2fs -T news' (note the capital -T,
which selects the 'news' usage type and its denser inode ratio), you
will end up with an ext2 filesystem that is very appropriate for a
tradspool inn2 setup.
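For reference, the 'news' usage type comes from the stock /etc/mke2fs.conf shipped with e2fsprogs; the relevant stanza (paraphrased, so check your own copy) looks like:

```
# /etc/mke2fs.conf excerpt: the 'news' usage type densifies inodes
[defaults]
    inode_ratio = 16384        # default: one inode per 16 KiB

[fs_types]
    news = {
        inode_ratio = 4096     # one inode per 4 KiB for small articles
    }
```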

I've done this twice, and both times I found it necessary to repair the
install with tdx-util because of tradindexed inode mismatch errors.
Once repaired with 'tdx-util -F', it seems to be working well.
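That repair step, as a sketch (the install path and the 'news' user are assumptions for a typical INN layout, and the block is guarded so it does nothing on machines without INN):

```shell
# Sketch: audit/repair the tradindexed overview after moving the spool.
# The TDX path and the 'news' user are assumptions; adjust as needed.
TDX=/usr/lib/news/bin/tdx-util
if [ -x "$TDX" ]; then
    su news -c "$TDX -A"    # audit only: report overview problems
    su news -c "$TDX -F"    # audit and fix the problems found
else
    echo "tdx-util not found at $TDX; adjust for your INN install"
fi
```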

I'm only working with i2pn2 right now, as it's not a peer to anyone but
my own systems. I don't want to break connectivity for anyone else until
I (think I) know what I'm doing. While the downtime temporarily held
back articles from retroBBS, they should be there now (i2pn2 sits
between rocksolid2 and retroBBS).

I'll post any other issues here, along with the final results, good or
bad, when I'm done testing.

Retro Guy



Subject: Re: i2pn2 spool size
From: anon@anon.com (anon)
Newsgroups: rocksolid.nodes
Organization: def5
Date: Tue, 26 Nov 2019 21:25 UTC

thanks for the walkthrough. that might come in handy one day.

Posted on def4


Subject: Re: i2pn2 spool size
From: admin@neodome.net (Neodome Admin)
Newsgroups: rocksolid.nodes
Organization: Neodome
Date: Thu, 5 Dec 2019 04:12 UTC
Retro Guy <retro.guy@retrobbs.rocksolidbbs.com.remove-lpj-this> wrote:
On Sun, 24 Nov 2019 13:02:39 +0000
"anon" <anon@anon.com> wrote:


this change is something you have to do when creating the partition,
right? So you need to create a new one and then move the spool to it?

Exactly.

can you list what you have been doing, exactly?

i2pn2 has been running out of inodes while plenty of bytes (many GB)
are still available. This is due to the small size of news articles:
each one uses up an inode while taking very little space.

I created a new partition not much bigger than the original, but with
far more inodes. The new partition is 20% larger than the old, but has
500% more inodes available.

Perhaps ZFS might be a better solution? No issues with inodes on ZFS.

--
Neodome


Subject: Re: i2pn2 spool size
From: retro.guy@retrobbs.rocksolidbbs.com.remove-zn5-this (Retro Guy)
Newsgroups: rocksolid.nodes
Organization: RetroBBS
Date: Thu, 5 Dec 2019 04:46 UTC
  To: Neodome Admin
Neodome Admin wrote:

Retro Guy <retro.guy@retrobbs.rocksolidbbs.com.remove-lpj-this> wrote:
On Sun, 24 Nov 2019 13:02:39 +0000
"anon" <anon@anon.com> wrote:


this change is something you have to do when creating the partition,
right? So you need to create a new one and then move the spool to it?

Exactly.

can you list what you have been doing, exactly?

i2pn2 has been running out of inodes while plenty of bytes (many GB)
are still available. This is due to the small size of news articles:
each one uses up an inode while taking very little space.

I created a new partition not much bigger than the original, but with
far more inodes. The new partition is 20% larger than the old, but has
500% more inodes available.

Perhaps ZFS might be a better solution? No issues with inodes on ZFS.

I've used ZFS before and really like the ability to add partitions, etc. to the filesystem. That's a good idea. I'm planning to expand a backup machine local to me soon and may use ZFS.

If you don't mind me asking, what filesystem do you prefer for news spools yourself? Are you using ZFS?

Retro Guy

--
Posted on RetroBBS



Subject: Re: i2pn2 spool size
From: admin@neodome.net (Neodome Admin)
Newsgroups: rocksolid.nodes
Organization: Neodome
Date: Thu, 5 Dec 2019 08:20 UTC
Retro Guy <retro.guy@retrobbs.rocksolidbbs.com.remove-zn5-this> wrote:
  To: Neodome Admin
Neodome Admin wrote:

Retro Guy <retro.guy@retrobbs.rocksolidbbs.com.remove-lpj-this> wrote:
On Sun, 24 Nov 2019 13:02:39 +0000
"anon" <anon@anon.com> wrote:


this change is something you have to do when creating the partition,
right? So you need to create a new one and then move the spool to it?

Exactly.

can you list what you have been doing, exactly?

i2pn2 has been running out of inodes while plenty of bytes (many GB)
are still available. This is due to the small size of news articles:
each one uses up an inode while taking very little space.

I created a new partition not much bigger than the original, but with
far more inodes. The new partition is 20% larger than the old, but has
500% more inodes available.

Perhaps ZFS might be a better solution? No issues with inodes on ZFS.

I've used ZFS before and really like the ability to add partitions, etc.
to the filesystem. That's a good idea. I'm planning to expand a backup
machine local to me soon and may use ZFS.

If you don't mind me asking, what filesystem do you prefer for news
spools yourself? Are you using ZFS?

Retro Guy


Yes, I'm running FreeBSD on ZFS. However, the filesystem choice doesn't
affect my setup much at the moment, because I'm using CNFS buffers for
spool storage. That said, I know some admins use tradspool on ZFS to
archive text newsgroups, and they have no issues even with very large
numbers of articles.

I've always used CNFS buffers because I previously had limited space and
didn't want to run out of it. When I moved the server to a bigger
machine I kept the same setup, since I was planning to play with some
large databases and wasn't sure how much storage I would need for that.
I think I have about 50 GB for text groups, which gives me several
months of retention.

--
Neodome


Subject: Re: i2pn2 spool size
From: Retro Guy@rslight.i2p (Retro Guy)
Newsgroups: rocksolid.nodes
Organization: Rocksolid Light
Date: Fri, 6 Dec 2019 01:58 UTC
Neodome Admin wrote:

Perhaps ZFS might be a better solution? No issues with inodes on ZFS.

I've used ZFS before and really like the ability to add partitions, etc.
to the filesystem. That's a good idea. I'm planning to expand a backup
machine local to me soon and may use ZFS.

If you don't mind me asking, what filesystem do you prefer for news
spools yourself? Are you using ZFS?

Retro Guy


Yes, I'm running FreeBSD on ZFS. However, the filesystem choice doesn't
affect my setup much at the moment, because I'm using CNFS buffers for
spool storage. That said, I know some admins use tradspool on ZFS to
archive text newsgroups, and they have no issues even with very large
numbers of articles.

I've always used CNFS buffers because I previously had limited space and
didn't want to run out of it. When I moved the server to a bigger
machine I kept the same setup, since I was planning to play with some
large databases and wasn't sure how much storage I would need for that.
I think I have about 50 GB for text groups, which gives me several
months of retention.

When I started using inn, I didn't realize there were options other than tradspool, so that's how it ran. I've been reading a bit about CNFS just to learn, and it looks good.

After the change to increase the inode count, I see that 5 months of text groups is using 23% of the disk space but only 12% of the inodes. So it looks like space, not inodes, will now be the limitation.

Still, ZFS makes it so easy to add space that I do need to consider it.
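For comparison, growing a spool on ZFS is a couple of commands; the pool and device names below are invented, and the guard keeps this from touching a live system:

```shell
# Sketch of growing spool space on ZFS. Pool/device names are made up;
# set RUN_ZFS=1 only on a scratch machine with the zfs tools installed.
if [ "${RUN_ZFS:-0}" = "1" ]; then
    zpool create news mirror /dev/da1 /dev/da2
    zfs create -o mountpoint=/var/spool/news news/spool
    zpool add news mirror /dev/da3 /dev/da4   # grow the pool online
    zfs list news/spool                       # no fixed inode table to outgrow
else
    echo "set RUN_ZFS=1 on a scratch machine to actually run this"
fi
```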

Retro Guy
--
Posted on Rocksolid Light


Subject: Re: i2pn2 spool size
From: admin@neodome.net (Neodome Admin)
Newsgroups: rocksolid.nodes
Organization: Neodome
Date: Sat, 7 Dec 2019 15:10 UTC
Retro Guy <Retro Guy@rslight.i2p> wrote:
Neodome Admin wrote:

Perhaps ZFS might be a better solution? No issues with inodes on ZFS.

I've used ZFS before and really like the ability to add partitions, etc.
to the filesystem. That's a good idea. I'm planning to expand a backup
machine local to me soon and may use ZFS.

If you don't mind me asking, what filesystem do you prefer for news
spools yourself? Are you using ZFS?

Retro Guy


Yes, I'm running FreeBSD on ZFS. However, the filesystem choice doesn't
affect my setup much at the moment, because I'm using CNFS buffers for
spool storage. That said, I know some admins use tradspool on ZFS to
archive text newsgroups, and they have no issues even with very large
numbers of articles.

I've always used CNFS buffers because I previously had limited space and
didn't want to run out of it. When I moved the server to a bigger
machine I kept the same setup, since I was planning to play with some
large databases and wasn't sure how much storage I would need for that.
I think I have about 50 GB for text groups, which gives me several
months of retention.

When I started using inn, I didn't realize there were options other than
tradspool, so that's how it ran. I've been reading a bit about CNFS just
to learn, and it looks good.

CNFS buffers are good if you don't mind having less control over article
expiration. For example, if you have a 50 GB buffer and all of its space
is used, new articles simply start overwriting the oldest ones, making
sure you won't accidentally run out of space. It's also harder to work
with individual articles in a CNFS buffer: if you have some kind of
script directly accessing articles in the spool, you'll have to use an
internal INN program to print each article to stdout, or process them
as they come in.
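A minimal sketch of how a setup like that is wired together in INN, assuming a single 50 GB buffer (the buffer names, path, and newsgroup pattern here are invented; real configs will differ):

```
# storage.conf: route articles into the CNFS metacycbuff named TEXT
method cnfs {
    newsgroups: *
    class: 1
    options: TEXT
}

# cycbuff.conf: one 50 GB cyclic buffer (sizes are in kilobytes);
# when it fills, new articles overwrite the oldest automatically
cycbuff:TEXT1:/var/spool/cycbuffs/text1:52428800
metacycbuff:TEXT:TEXT1
```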

After the change to increase the inode count, I see that 5 months of
text groups is using 23% of the disk space but only 12% of the inodes.
So it looks like space, not inodes, will now be the limitation.

Still, ZFS makes it so easy to add space that I do need to consider it.

I would, too. It seems ZFS has been ready for production for quite a while now.

--
Neodome


rocksolid light 0.6.5e
clearnet i2p tor