11:51:47 mc_ what's the best way to add another bigcouch to an existing cluster?
15:09:25 barney_: best way is to not do it, i.e. create a new cluster, move all data to it, then kill the old cluster.
15:12:38 barney_: the reason is that while it's possible to add new nodes to an existing cluster, documents that have already been written will not re-propagate to the new node; only new documents will get spread over all the nodes, including the new one. So although it should work fine, it puts you in a rather scary position: you may think you have the redundancy offered by the new node in addition to
15:12:44 your old ones, when in reality you only have partial redundancy from it. If enough of the old nodes go down, you will lose data, even though you might still have enough nodes up (including the new node) that you shouldn't.
15:18:04 barney_: ^ this is the way
15:18:22 couch does not currently support rebalancing dbs across nodes
15:18:28 though maybe 3.x might? haven't looked
15:19:08 well lookie there: https://docs.couchbase.com/server/current/learn/clusters-and-availability/rebalance.html
15:19:19 shit, that's couchbase lololo
15:19:22 ignore
16:10:28 ok thanks guys
16:14:34 mc_ a quick erlang question... using foldl, trying to build a list of binaries. i'm doing -> [Acc ++ NewElement] - but getting [[[<<"a">>|<<"b">>]<<"c">>]]
16:18:39 Acc is a list that you're wrapping in a new list?
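The nesting bug being asked about can be reproduced in a minimal sketch. The module and function names below are mine, not from the channel; `wrong/1` shows how `[Acc ++ NewElement]` wraps the accumulator in a fresh list on every step, and `right/1` is the idiomatic prepend-then-reverse version suggested later in the conversation:

```erlang
%% build_binaries.erl -- illustrative sketch; module/function names are
%% hypothetical, not from the channel.
-module(build_binaries).
-export([wrong/1, right/1]).

%% [Acc ++ El] puts the old accumulator *inside* a new one-element list
%% each iteration, so the result nests deeper and deeper. Worse, when El
%% is a binary (not a list), ++ tacks it on as an improper tail, which is
%% where the [...|<<"b">>] shapes come from.
wrong(Elements) ->
    lists:foldl(fun(El, Acc) -> [Acc ++ El] end, [], Elements).

%% Idiomatic version: prepend in O(1), then reverse once at the end.
right(Elements) ->
    lists:reverse(lists:foldl(fun(El, Acc) -> [El | Acc] end, [], Elements)).
```

With `right([<<"a">>, <<"b">>, <<"c">>])` you get the flat `[<<"a">>, <<"b">>, <<"c">>]` the asker was after.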
16:18:53 hmm maybe i mean -> Acc ++ [NewElement]
16:18:58 typically you would foldl and return [NewElement | Acc]
16:19:10 oh ok
16:19:10 then lists:reverse/1 the result if you need it in a different order
16:19:45 lists:foldl(fun(El, Acc) -> [El | Acc] end, [], [a, b, c]) would give you [c, b, a]
16:20:20 but lists:reverse/1 is "fast" (it's a C code bit inside the lists module)
16:20:35 so better to build a list in reverse order like this (since prepending items to a list is super fast)
16:20:53 ok thanks
16:24:23 Numbers = lists:foldl(fun number_lookup_fold/2, [], Matches),
16:24:23 number_lookup_fold(Result, Acc) -> [kz_json:get_ne_binary_value(<<"value">>, Result) | Acc].
16:24:45 would this work too? Numbers = [kz_json:get_ne_binary_value(<<"value">>, Match) || Match <- Matches,
16:25:55 yes, list comprehensions are the way to go
16:27:00 ok great. is it more efficient?
16:28:01 LCs were faster in older versions, the gap is narrowing
16:28:19 but as long as the left side of || is small, i prefer them
16:28:46 LCs aren't that comprehendable :-D
16:29:11 makes my head implode.. lol
16:31:55 they're sugar for lists:map/2 basically
16:32:06 but you can do filtering and multiple generators
16:32:44 the hidden trick is: if you pattern match in the generator, any elements that don't match will be silently skipped
16:33:12 so [Foo || {ok, Foo} <- [{ok, a}, {ok, b}, {error, c}]] would be [a, b]
16:33:43 but lists:map(fun({ok, Foo}) -> Foo end, [{ok, a}, {ok, b}, {error, c}]) would crash with a function_clause error
16:34:10 which can be useful or cause mayhem if you're not ready for it :)
16:36:13 yes that does seem to have merit
16:43:17 >> barney_: best way is to not do it, i.e. create a new cluster, move all data to it, then kill old cluster.
16:43:18 what does move all the data over involve?
16:44:54 ruel ^^
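The generator-pattern filtering described above can be collected into a small standalone sketch (module and function names are mine, for illustration only):

```erlang
%% lc_filter.erl -- sketch of pattern-match filtering in a list
%% comprehension generator; names are hypothetical, not from the channel.
-module(lc_filter).
-export([oks/1, oks_map/1]).

%% Tuples that don't match {ok, _} are silently skipped by the generator:
%% oks([{ok, a}, {ok, b}, {error, c}]) -> [a, b]
oks(Results) ->
    [Foo || {ok, Foo} <- Results].

%% The equivalent lists:map/2 version is NOT equivalent on mixed input:
%% the fun's single clause has no match for {error, c}, so it raises a
%% function_clause error instead of skipping the element.
oks_map(Results) ->
    lists:map(fun({ok, Foo}) -> Foo end, Results).
```

Note also that the comprehension preserves the order of `Matches`, whereas the plain fold version shown earlier builds its result in reverse; that, plus the silent skipping, is worth keeping in mind when swapping one for the other.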