Dan Langille | 10 Feb 01:09 2016

Running lots of inserts from selects on 9.4.5

I have a wee database server which regularly tries to insert 1.5 million or even 15 million new rows into a 400 million row table.  Sometimes these inserts take hours.

The actual query that produces the join is fast.  It's the insert which is slow.

INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5, DeltaSeq) 
  SELECT batch_testing.FileIndex, batch_testing.JobId, Path.PathId, Filename.FilenameId, batch_testing.LStat, batch_testing.MD5, batch_testing.DeltaSeq 
    FROM batch_testing JOIN Path     ON (batch_testing.Path = Path.Path) 
                       JOIN Filename ON (batch_testing.Name = Filename.Name);

This is part of the plan: http://img.ly/images/9374145/full (created via http://tatiyants.com/pev/#/plans).

This gist contains postgresql.conf, zfs settings, slog, disk partitions.

   https://gist.github.com/dlangille/33331a8c8cc62fa13b9f
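
For anyone checking my assumptions: one way to confirm the time goes into maintaining File rather than into the join is to time the bare SELECT, then look at the indexes the INSERT must update (a sketch, assuming only the schema shown above):

EXPLAIN (ANALYZE, BUFFERS)
  SELECT bt.FileIndex, bt.JobId, p.PathId, f.FilenameId, bt.LStat, bt.MD5, bt.DeltaSeq
    FROM batch_testing bt
         JOIN Path p     ON bt.Path = p.Path
         JOIN Filename f ON bt.Name = f.Name;

-- Every index on File is updated once per inserted row; on a 400 million row
-- table those updates are mostly random I/O, which is where hours can go.
SELECT indexrelname, pg_size_pretty(pg_relation_size(indexrelid)) AS size
  FROM pg_stat_user_indexes
 WHERE relname = 'file';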

I'm tempted to move it to faster hardware, but in case I've missed something basic...

Thank you.

-- 
Dan Langille - BSDCan / PGCon




Gustav Karlsson | 8 Feb 10:45 2016

Primary key index suddenly became very slow

Hi,

Question:

What can cause a primary key index to suddenly become very slow? An index scan for a single row was taking 2-3 seconds. A manual VACUUM resolved the problem.


Background:

We have a simple table ‘KONTO’ with about 600k rows. 


            Column            |            Type             |   Modifiers
------------------------------+-----------------------------+---------------
 id                           | bigint                      | not null
...

Indexes:
    "konto_pk" PRIMARY KEY, btree (id)
...


Over the weekend, lookups using the primary key index (‘konto_pk’) became very slow, in the region of 2-3 seconds to fetch a single record:

QUERY PLAN
Index Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.052..2094.549 rows=1 loops=1)
  Index Cond: (id = 2121172829)
Planning time: 0.376 ms
Execution time: 2094.585 ms


After a manual VACUUM the execution time is OK:

QUERY PLAN
Index Scan using konto_pk on konto  (cost=0.42..6.44 rows=1 width=164) (actual time=0.037..2.876 rows=1 loops=1)
  Index Cond: (id = 2121172829)
Planning time: 0.793 ms
Execution time: 2.971 ms


So things are working OK again, but we would like to know what can cause such a degradation of the index scan, so we can avoid it happening again. (We are using PostgreSQL version 9.4.4.)
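
A monitoring sketch, on the assumption that accumulated dead tuples were the culprit (the manual VACUUM fixing it points that way): watching the table's vacuum statistics should catch a recurrence before lookups degrade.

SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'konto';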



Regards,
Gustav
Merlin Moncure | 5 Feb 20:52 2016

Re: bad COPY performance with NOTIFY in a trigger

On Fri, Feb 5, 2016 at 9:33 AM, Filip Rembiałkowski
<filip.rembialkowski@gmail.com> wrote:
> patch submitted on -hackers list.
> http://www.postgresql.org/message-id/CAP_rwwn2z0gPOn8GuQ3qDVS5+HgEcG2EzEOyiJtcA=vpDEhoCg@mail.gmail.com
>
> results after the patch:
>
> trigger= BEGIN RETURN NULL; END
> rows=40000
>       228ms COPY test.tab FROM '/tmp/test.dat'
>       205ms COPY test.tab FROM '/tmp/test.dat'
> rows=80000
>       494ms COPY test.tab FROM '/tmp/test.dat'
>       395ms COPY test.tab FROM '/tmp/test.dat'
> rows=120000
>       678ms COPY test.tab FROM '/tmp/test.dat'
>       652ms COPY test.tab FROM '/tmp/test.dat'
> rows=160000
>       956ms COPY test.tab FROM '/tmp/test.dat'
>       822ms COPY test.tab FROM '/tmp/test.dat'
> rows=200000
>      1184ms COPY test.tab FROM '/tmp/test.dat'
>      1072ms COPY test.tab FROM '/tmp/test.dat'
> trigger= BEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL; END
> rows=40000
>       440ms COPY test.tab FROM '/tmp/test.dat'
>       406ms COPY test.tab FROM '/tmp/test.dat'
> rows=80000
>       887ms COPY test.tab FROM '/tmp/test.dat'
>       769ms COPY test.tab FROM '/tmp/test.dat'
> rows=120000
>      1346ms COPY test.tab FROM '/tmp/test.dat'
>      1171ms COPY test.tab FROM '/tmp/test.dat'
> rows=160000
>      1710ms COPY test.tab FROM '/tmp/test.dat'
>      1709ms COPY test.tab FROM '/tmp/test.dat'
> rows=200000
>      2189ms COPY test.tab FROM '/tmp/test.dat'
>      2206ms COPY test.tab FROM '/tmp/test.dat'

I'm not so sure that this is a great idea.  Generally, we tend to
discourage GUCs that control behavior at the SQL level.  Are you 100%
certain that there is no path to optimizing this case without changing
behavior?

merlin


Marc Mamin | 5 Feb 12:28 2016

gin performance issue.

Postgres Version 9.3.10 (Linux)
 
Hello,
this is a large daily table that only gets bulk inserts (200-400 per day) with no updates.
After rebuilding the whole table, the Bitmap Index Scan on r_20160204_ix_toprid falls under 1 second (from 800)
 
Fastupdate is using the default, but autovacuum is disabled on that table, which contains 30 million rows.
Another peculiarity is that the cardinality of the indexed column is very high; the average count per distinct value is only 2.7.
 
I'm not sure what the problem is. Does the missing vacuum affect the GIN index beyond not cleaning the pending list?
As I understand it, this list is merged into the index automatically when it gets full, independently of the vacuum setting.
 
Could it be an index bloat issue?
 
And last but not least, can I reduce the problem through configuration?
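 
For reference, a configuration sketch of the two knobs that touch the pending list (assumptions on my side, not a confirmed diagnosis). Disabling fastupdate makes each insert pay the GIN update cost immediately but keeps scan times stable; alternatively a scheduled manual VACUUM flushes the pending list while autovacuum stays off:

ALTER INDEX r_20160204_ix_toprid SET (fastupdate = off);
VACUUM ANALYZE r_20160204;  -- table name guessed from the index name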
 
regards,
 
Marc Mamin
 
Filip Rembiałkowski | 4 Feb 22:12 2016

bad COPY performance with NOTIFY in a trigger

Hi.

A table has a trigger.
The trigger sends a NOTIFY.

A test with COPY FROM shows a non-linear correlation between the number of inserted rows and the COPY duration.

                             Table "test.tab"
 Column  |  Type   |                       Modifiers                       
---------+---------+-------------------------------------------------------
 id      | integer | not null default nextval('test.tab_id_seq'::regclass)
 payload | text    | 
Indexes:
    "tab_pkey" PRIMARY KEY, btree (id)
Triggers:
    trg AFTER INSERT ON test.tab FOR EACH ROW EXECUTE PROCEDURE test.fun()


Test Series 1. Trigger code: 
BEGIN RETURN NULL; END 
You can see linear scaling.

rows=40000
      191ms COPY test.tab FROM '/tmp/test.dat'
      201ms COPY test.tab FROM '/tmp/test.dat'
rows=80000
      426ms COPY test.tab FROM '/tmp/test.dat'
      415ms COPY test.tab FROM '/tmp/test.dat'
rows=120000
      634ms COPY test.tab FROM '/tmp/test.dat'
      616ms COPY test.tab FROM '/tmp/test.dat'
rows=160000
      843ms COPY test.tab FROM '/tmp/test.dat'
      834ms COPY test.tab FROM '/tmp/test.dat'
rows=200000
     1101ms COPY test.tab FROM '/tmp/test.dat'
     1094ms COPY test.tab FROM '/tmp/test.dat'


Test Series 2. Trigger code:
BEGIN PERFORM pg_notify('test',NEW.id::text); RETURN NULL; END
You can see non-linear scaling.

rows=40000
     9767ms COPY test.tab FROM '/tmp/test.dat'
     8901ms COPY test.tab FROM '/tmp/test.dat'
rows=80000
    37409ms COPY test.tab FROM '/tmp/test.dat'
    38015ms COPY test.tab FROM '/tmp/test.dat'
rows=120000
    90227ms COPY test.tab FROM '/tmp/test.dat'
    87838ms COPY test.tab FROM '/tmp/test.dat'
rows=160000
   160080ms COPY test.tab FROM '/tmp/test.dat'
   159801ms COPY test.tab FROM '/tmp/test.dat'
rows=200000
   247330ms COPY test.tab FROM '/tmp/test.dat'
   251191ms COPY test.tab FROM '/tmp/test.dat'


O(N^2) ???? 
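
If the quadratic cost comes from duplicate elimination in the backend's pending-notification list (my assumption: that list appears to be scanned once per pg_notify call within a transaction), a workaround is to notify once per statement instead of once per row. A sketch, reusing the table above; test.fun_stmt is a name I made up:

CREATE OR REPLACE FUNCTION test.fun_stmt() RETURNS trigger AS $$
BEGIN
  -- one notification per COPY instead of one per inserted row
  PERFORM pg_notify('test', 'load finished');
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER trg ON test.tab;
CREATE TRIGGER trg AFTER INSERT ON test.tab
  FOR EACH STATEMENT EXECUTE PROCEDURE test.fun_stmt();

A statement-level trigger cannot see NEW, so listeners would have to query the table for the new ids instead of getting them in the payload.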

Jordi | 4 Feb 12:15 2016

Bitmap and-ing between btree and gin?

Hello all,


I've been trying to get a query use indexes and it has raised a doubt whether pgsql supports bitmap and-ing between a multi-column btree index and a gin index.

The idea is to do a full-text search on a tsvector that is indexed with gin. Then there are typical extra filters like is_active that you would put in a btree index. Instead of using OFFSET I use a > operation on the id. Finally, to make sure the results listing always appears in the same order, I ORDER BY the id of the row. So something like this:

CREATE INDEX idx_gin_page ON page USING gin(search_vector);

CREATE INDEX idx_btree_active_iddesc ON page USING btree(is_active, id DESC);

SELECT *
FROM page
WHERE ((page.search_vector) @@ (plainto_tsquery('pg_catalog.english', 'myquery')))
  AND page.is_active = 1
  AND page.id > 100
ORDER BY page.id DESC
LIMIT 100;
Some options I considered:
- One big multi-column index with the btree_gin module, but that didn't work. I suppose it's because, just like gin, it doesn't support sorting.
- Separate indexes as above, but that didn't work either. The planner would always choose the btree index to apply the is_active=1 and id>100 filters and the sorting, and then filter those results manually on the tsvector, which is extremely slow.

BUT: when I remove the ORDER BY clause, the query runs really fast. It uses the two indexes separately and bitmap-ANDs them together, resulting in a fast query.

So my question is whether there is something wrong with my query or indexes, or whether pgsql simply does not support combining sorting with bitmap and-ing?
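
In case it helps, a workaround sketch (my own idea, not a confirmed answer): on 9.x a CTE is an optimization fence, so it can force the gin match to run first and sort only the matching rows afterwards. It only pays off when the text match is selective, since the CTE result is materialized:

WITH matches AS (
  SELECT id, is_active
    FROM page
   WHERE search_vector @@ plainto_tsquery('pg_catalog.english', 'myquery')
)
SELECT id
  FROM matches
 WHERE is_active = 1
   AND id > 100
 ORDER BY id DESC
 LIMIT 100;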


Thanks and have a nice day
Jordi

Jérôme Augé | 4 Feb 10:12 2016

Understanding ANALYZE memory usage with "big" tsvector columns

Hi,

I'm trying to understand how to estimate and minimize memory
consumption of ANALYZE operations on "big" tsvector columns.

Context:

The PostgreSQL server is 9.1.19 (Ubuntu package 9.1.19-0ubuntu0.12.04).

I have a database on which I noticed that autovacuum operations could
consume up to 2 GB of resident memory (observed with top's RSS
column).

This is sometime problematic because with 3 autovacuum processes
(default value on Ubuntu), this leads to a peak usage of 6 GB, and
sends the server into swapin/swapout madness.

This typically happens during restoration of dumps or massive updates
in the database, which triggers the autovacuum processes, and slows
down the server during the execution of these operations due to the
swapin/swapout.

Up to today we addressed this behavior by either disabling autovacuum
or temporarily bumping the VM's memory limit for the duration of the
operation.

Now, I think I managed to replicate and isolate the problem.

My analysis:

I have a table with ~20k tuples and 350 columns of type int, text,
and tsvector.

I created a copy of this table and iteratively dropped some columns to
see if a specific column was the cause of this spike in memory usage.
I came down to the simple case of a table with a single tsvector column
that causes ANALYZE to consume up to 2 GB of memory.

So, this table has a single column of type tsvector, and this column
is quite big because it is originally the concatenation of all the
other tsvector columns from the table (this tsvector column also
has a GIST index).

Here is the top 10 length for this column :

--8<--
# SELECT length(fulltext) FROM test ORDER BY length DESC LIMIT 10;
length
--------
87449
87449
87401
87272
87261
87259
87259
87259
87257
87257
(10 rows)
-->8--

I tried playing with "default_statistics_target" (which is set to
100): if I reduce it to 5, then the ANALYZE is almost immediate and
consumes less than ~200 MB. At 10, the process starts to consume up to
~440 MB.

I see no difference in PostgreSQL's plan selection between
"default_statistics_target" 1 and 100: EXPLAIN ANALYZE shows the same
plan being executed using the GIST index (for a simple "SELECT
count(ctid) FROM test WHERE fulltext @@ 'Hello'").

So:
- Is there a way to estimate or reduce ANALYZE's peak memory usage on
this kind of table?
- Is it "safe" to set STATISTICS = 1 on this particular "big" tsvector
column (see the sketch below)? Or could it have an adverse effect on query plan selection?
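
The sketch for the second question: the statistics target can be capped on
just this column, instead of lowering default_statistics_target for the
whole database (assuming the single-column test table above):

ALTER TABLE test ALTER COLUMN fulltext SET STATISTICS 5;
ANALYZE test;

This may coarsen the selectivity estimates for @@, though as noted above
the plans came out identical at targets 1 and 100 for this query.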

I'm currently in the process of upgrading to Postgresql 9.5, so I'll
see if the behavior changes or not on this version.

Thanks,
Jérôme


Mathieu De Zutter | 1 Feb 08:23 2016

View containing a recursive function

Hi all,

I have a recursive part in my database logic that I want to isolate and reuse as a view. I had found a blog post that explained how to move a function parameter into a view. The SQL is in the attachment.
When I write a query based on that view with a fixed value (or values) for the (input) parameter, the planner does fine and only evaluates the function once.
However, when the value of the parameter has to be deduced from something else, the planner doesn't understand that and evaluates the function for each possible value.

Any pointers to what I'm doing wrong or on how to optimize it?

The attachment contains the queries and EXPLAIN plans.

Thanks!

Kind regards,
Mathieu
CREATE OR REPLACE FUNCTION fn_covering_works(wid INTEGER)
  RETURNS TABLE(work_id INTEGER)
AS
  $$
  WITH RECURSIVE func(work_id) AS
  (
    SELECT wid
    UNION ALL
    SELECT ad.adapted_id
    FROM func f JOIN adaptation ad ON f.work_id = ad.original_id
  )
  SELECT work_id
  FROM func
  $$
LANGUAGE 'sql' ROWS 1 COST 10000;

CREATE OR REPLACE VIEW covering_works_r AS
  SELECT
    w.id                    AS work_id,
    fn_covering_works(w.id) AS covering_work_id
  FROM work w;

-- This one is fine

EXPLAIN ANALYZE
SELECT
  w.id,
  cw.covering_work_id
FROM work w
  JOIN covering_works_r cw ON cw.work_id = w.id
WHERE w.id = 4249;

  id  | covering_work_id 
------+------------------
 4249 |             4249
 4249 |           102813
 4249 |             4250
 4249 |            23551
 4249 |            68931
 4249 |            74836
 4249 |            76088
 4249 |           111423
 4249 |           112399
 4249 |           112502
 4249 |           112666
 4249 |           120640
 4249 |           126994
 4249 |           133918
 4249 |           139519
 4249 |           142989
 4249 |           149393
 4249 |           111424

"Nested Loop  (cost=0.58..33.64 rows=1 width=8) (actual time=0.334..0.424 rows=18 loops=1)"
"  ->  Index Only Scan using work_pkey on work w  (cost=0.29..4.31 rows=1 width=4) (actual
time=0.021..0.021 rows=1 loops=1)"
"        Index Cond: (id = 4249)"
"        Heap Fetches: 0"
"  ->  Index Only Scan using work_pkey on work w_1  (cost=0.29..29.31 rows=1 width=4) (actual
time=0.309..0.393 rows=18 loops=1)"
"        Index Cond: (id = 4249)"
"        Heap Fetches: 0"
"Total runtime: 0.457 ms"

-- This one is too slow, but should be as fast as the first query.
-- At first sight it seems right, but the condition w_1.id=4249 (=w.id) isn't pushed to the second index scan.

EXPLAIN ANALYZE
SELECT
  w.id,
  cw.covering_work_id
FROM work w
  JOIN covering_works_r cw ON cw.work_id = w.id
WHERE w.first_release_id = 4249;

  id  | covering_work_id 
------+------------------
 4249 |             4249
 4249 |           102813
 4249 |             4250
 4249 |            23551
 4249 |            68931
 4249 |            74836
 4249 |            76088
 4249 |           111423
 4249 |           112399
 4249 |           112502
 4249 |           112666
 4249 |           120640
 4249 |           126994
 4249 |           133918
 4249 |           139519
 4249 |           142989
 4249 |           149393
 4249 |           111424

"Nested Loop  (cost=0.58..1659529.05 rows=1 width=8) (actual time=30.075..995.889 rows=18 loops=1)"
"  Join Filter: (w.id = w_1.id)"
"  Rows Removed by Join Filter: 81228"
"  ->  Index Scan using work_first_release_idx on work w  (cost=0.29..8.31 rows=1 width=4) (actual
time=0.009..0.009 rows=1 loops=1)"
"        Index Cond: (first_release_id = 4249)"
"  ->  Index Only Scan using work_pkey on work w_1  (cost=0.29..1658030.07 rows=66252 width=4) (actual
time=0.185..981.054 rows=81246 loops=1)"
"        Heap Fetches: 0"
"Total runtime: 995.916 ms"

# select id, first_release_id from work w where id = 4249;
  id  | first_release_id 
------+------------------
 4249 |             4249


Hedayat Vatankhah | 30 Jan 13:30 2016

PostgreSQL seems to create inefficient plans in simple conditional joins

Dear all,
First of all, I should apologize if my email doesn't follow all the guidelines.
I'm trying to follow them, though!

If referencing to links is OK, you can find the full description of
the issue at:
http://dba.stackexchange.com/questions/127082/postgresql-seems-to-create-inefficient-plans-in-simple-conditional-joins

It contains table definitions, queries, EXPLAIN/EXPLAIN ANALYZE output for them,
and a description of the test conditions. But I'll provide a summary of the
planning issue below.

I'm using PostgreSQL 9.3. I've run VACUUM ANALYZE on the DB and it has
not been modified since.

Consider these tables:
CREATE TABLE t1
(
  id bigint NOT NULL DEFAULT nextval('ids_seq'::regclass),
  total integer NOT NULL,
  price integer NOT NULL,
  CONSTRAINT pk_t1 PRIMARY KEY (id)
)

CREATE TABLE t2
(
  id bigint NOT NULL,
  category smallint NOT NULL,
  CONSTRAINT pk_t2 PRIMARY KEY (id),
  CONSTRAINT fk_id FOREIGN KEY (id)
      REFERENCES t1 (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)

Personally, I expect both queries below to perform exactly the same:

SELECT
    t1.id, *
FROM
    t1
INNER JOIN
    t2 ON t1.id = t2.id
    where t1.id > -9223372036513411363;

And:

SELECT
    t1.id, *
FROM
    t1
INNER JOIN
    t2 ON t1.id = t2.id
    where t1.id > -9223372036513411363 and t2.id > -9223372036513411363;

Unfortunately, they do not. PostgreSQL creates different plans for these
queries, which results in very poor performance for the first one compared
to the second (What I'm testing against is a DB with around 350 million
rows in t1, and slightly less in t2).

EXPLAIN output:
First query: http://explain.depesz.com/s/uauk
Second query: link: http://explain.depesz.com/s/uQd

The problem with the plan for the first query is that it limits the
index scan on t1 with the WHERE condition, but doesn't do so for t2.

A similar behavior happens if you replace INNER JOIN with LEFT JOIN,
and if you use "USING (id) where id > -9223372036513411363" instead
of "ON ...".

But it is important to get the first query right. Consider that I want to create
a view on the SELECT statement (without the condition) to simplify building
queries on the data. If the view provides only a single id column, a SELECT
query on the view with such a condition on the id column will result in a query
similar to the first one. With this problem, I have to provide both ID columns
in the view so that queries can add the condition on the ID column of each
table. Now imagine what happens when we are joining many tables together on the
ID column...

Is there anything wrong with my queries, or with me expecting both queries to be
the same? Can I do anything so that PostgreSQL will behave similarly for the
first query? Or is this fixed in newer versions?
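
For completeness, a sketch of the workaround described above (my reading is that
equivalence classes propagate only equality, not inequalities, so the range
condition has to be spelled out for every table in the join):

CREATE OR REPLACE VIEW t1_t2 AS
SELECT
  t1.id, t1.total, t1.price, t2.category,
  t2.id AS id2  -- expose the second id so callers can bound both scans
FROM t1
INNER JOIN t2 ON t1.id = t2.id;

SELECT * FROM t1_t2
  where id > -9223372036513411363 and id2 > -9223372036513411363;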

Thanks in advance,
Hedayat


jfleming | 29 Jan 23:06 2016

jsonb_agg performance

The jsonb_agg function seems to have significantly worse performance than its json_agg counterpart:

=> explain analyze select pa.product_id, jsonb_agg(attributes) from product_attributes2 pa group by pa.product_id;
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=1127.54..1231.62 rows=3046 width=380) (actual time=28.632..241.647 rows=3046 loops=1)
   Group Key: product_id
   ->  Sort  (cost=1127.54..1149.54 rows=8800 width=380) (actual time=28.526..32.826 rows=8800 loops=1)
         Sort Key: product_id
         Sort Method: external sort  Disk: 3360kB
         ->  Seq Scan on product_attributes2 pa  (cost=0.00..551.00 rows=8800 width=380) (actual time=0.010..7.231 rows=8800 loops=1)
 Planning time: 0.376 ms
 Execution time: 242.963 ms
(8 rows)

=> explain analyze select pa.product_id, json_agg(attributes) from product_attributes3 pa group by pa.product_id;
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=1136.54..1240.62 rows=3046 width=387) (actual time=17.731..30.126 rows=3046 loops=1)
   Group Key: product_id
   ->  Sort  (cost=1136.54..1158.54 rows=8800 width=387) (actual time=17.707..20.705 rows=8800 loops=1)
         Sort Key: product_id
         Sort Method: external sort  Disk: 3416kB
         ->  Seq Scan on product_attributes3 pa  (cost=0.00..560.00 rows=8800 width=387) (actual time=0.006..5.568 rows=8800 loops=1)
 Planning time: 0.181 ms
 Execution time: 31.276 ms
(8 rows)

The only difference between the two tables is the type of the attributes column (jsonb vs json).  Each table contains the same 8800 rows.  Even running json_agg on the jsonb column seems to be faster:

=> explain analyze select pa.product_id, json_agg(attributes) from product_attributes2 pa group by pa.product_id;
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=1127.54..1231.62 rows=3046 width=380) (actual time=30.626..62.943 rows=3046 loops=1)
   Group Key: product_id
   ->  Sort  (cost=1127.54..1149.54 rows=8800 width=380) (actual time=30.590..34.157 rows=8800 loops=1)
         Sort Key: product_id
         Sort Method: external sort  Disk: 3360kB
         ->  Seq Scan on product_attributes2 pa  (cost=0.00..551.00 rows=8800 width=380) (actual time=0.014..7.388 rows=8800 loops=1)
 Planning time: 0.142 ms
 Execution time: 64.504 ms
(8 rows)

Is it expected that jsonb_agg performance would be that much worse than json_agg?
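
One thing worth trying as a sketch (an assumption on my part, not a confirmed fix): aggregate with json_agg, which was faster even on the jsonb column above, and cast the result once per group if jsonb output is needed:

=> explain analyze select pa.product_id, json_agg(attributes)::jsonb
   from product_attributes2 pa group by pa.product_id;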
Albe Laurenz | 29 Jan 16:17 2016

Hash join gets slower as work_mem increases?

I have a query that runs *slower* if I increase work_mem.

The execution plans are identical in both cases, except that a temp file
is used when work_mem is smaller.

The relevant lines of EXPLAIN ANALYZE output are:

With work_mem='100MB':
->  Hash Join  (cost=46738.74..285400.61 rows=292 width=8) (actual time=4296.986..106087.683
rows=187222 loops=1)
      Hash Cond: ("*SELECT* 1_2".postadresse_id = p.postadresse_id)
      Buffers: shared hit=1181177 dirtied=1, temp read=7232 written=7230

With work_mem='500MB':
->  Hash Join  (cost=46738.74..285400.61 rows=292 width=8) (actual time=3802.849..245970.049
rows=187222 loops=1)
      Hash Cond: ("*SELECT* 1_2".postadresse_id = p.postadresse_id)
      Buffers: shared hit=1181175 dirtied=111

I ran operf on both backends, and they look quite similar, except that the
number of samples is different (this is "opreport -c" output):

CPU: Intel Sandy Bridge microarchitecture, speed 2899.8 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask)
count 90000
samples  %        image name               symbol name
-------------------------------------------------------------------------------
  112       0.0019  postgres                 ExecProcNode
  3020116  49.9904  postgres                 ExecScanHashBucket
  3021162  50.0077  postgres                 ExecHashJoin
3020116  92.8440  postgres                 ExecScanHashBucket
  3020116  49.9207  postgres                 ExecScanHashBucket [self]
  3020116  49.9207  postgres                 ExecScanHashBucket
  8190      0.1354  vmlinux                  apic_timer_interrupt

What could be an explanation for this?
Is this known behaviour?
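
For anyone reproducing the comparison, work_mem can be flipped per session (a sketch; the query itself was not posted, so it stays elided):

SET work_mem = '100MB';
EXPLAIN (ANALYZE, BUFFERS) ...;  -- temp file used, but faster probes
SET work_mem = '500MB';
EXPLAIN (ANALYZE, BUFFERS) ...;  -- single in-memory batch, slower here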

Yours,
Laurenz Albe

