Update! current Win standalone and Max/MSP source package (32.0MB): social_sampler_catart_0.49_win (321)
You’ve managed to get started on all this quite impressively – at least that’s how it looks from the youtube material <: All of what you’re trying to implement is a bit over anyone’s head if taken literally, but, as you’ve stated, learning a LOT is probably the more important aspect here.
Now you know me well, eleven, and I think it would be of great benefit to put a bit of scientific method into this work. Again, it’s fascinating to see that you’ve got a working prototype running, but the method tells us to separate all the involved processes and to understand them (develop models, in your case) in a controlled environment first – meaning not using such complicated sources – then apply some kind of real-world input (signal, if that’s your fancy), then a bit more… Only after all that will you be able to have some idea of how your system will perform.
You could skip all that and go for the best result via a simple trial-and-error path, but I don’t believe that could lead to any kind of stable, yet dynamic enough system. To some extent this is what I had in mind when pointing out that “the randomness is shifted from the net to the user, and that’s not it, right” in my last post. I’ve had some experience dealing with noisy signals, and simply looking at them and fixing them on a case-by-case basis usually does not end up in any kind of robust algorithm.
My idea would be to distinguish the basic stages of your system, then formulate a problem for each of them (kind of a black-box transfer function: if this goes in, then that should come out), then start with some basic solutions, assemble it all and see how it goes. That kind of subdivision would provide you not only with this lovely scientific method I’m talking about, but also with a means for other people (such as myself) to add input at the points they are capable in. To put it in perspective, here is a crude example of how I see it:
Stage 1: DATA ACQUISITION AND PROCESSING
a) Social networks (twitter, youtube, etc.), a microphone outside your window, etc.
b) Lookup phrase, spectral transfer function on the microphone, etc.
c) Constraining parameters (lookup time, content-related options, number of numeric outputs)
a1..aN) Numeric outputs (some kind of numerical evaluation of the lookup – update time, mean post length, geographic location, standard deviation of time between posts)
b) Related text output
c) Related video output
d) Related audio output
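The a1..aN numeric outputs above could start as a few plain statistics over the fetched posts – a minimal sketch, assuming each post arrives as a (timestamp, text) pair (the function and field names are illustrative):

```python
# Stage 1 sketch: reduce a batch of looked-up posts to numeric outputs.
from statistics import mean, pstdev

def stage1_numeric_outputs(posts):
    """posts: list of (unix_timestamp, text) tuples, sorted by time."""
    times = [t for t, _ in posts]
    gaps = [b - a for a, b in zip(times, times[1:])]  # time between posts
    return {
        "mean_post_length": mean(len(text) for _, text in posts),
        "gap_stddev": pstdev(gaps) if len(gaps) > 1 else 0.0,
        "update_time": times[-1] - times[0],  # span covered by the lookup
    }

posts = [(0, "hello"), (10, "social sampler"), (25, "noise")]
print(stage1_numeric_outputs(posts))
```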
Stage 2: AUDIO DATA MANIPULATION (a delay line in my example)
a) Sample input (audio data & position in delay line)
b) Delay line properties (modification of transfer function and length of line)
a) Delay line output
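Stage 2 as a minimal sketch: a fixed-length delay line whose feedback coefficient (a crude stand-in for the transfer-function modification) can be changed while running:

```python
from collections import deque

class DelayLine:
    """Fixed-length delay with a feedback coefficient; both are
    Stage 2's b-inputs and can be modified between samples."""
    def __init__(self, length, feedback=0.5):
        self.buf = deque([0.0] * length, maxlen=length)
        self.feedback = feedback

    def process(self, x):
        delayed = self.buf[0]                    # oldest sample comes out
        self.buf.append(x + delayed * self.feedback)  # feed it back in
        return delayed

dl = DelayLine(length=3, feedback=0.5)
# an impulse reappears after 3 samples, then again at half amplitude
print([dl.process(s) for s in [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
```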
Stage 3: SYSTEM EVOLUTION
a) All the above
b) Output of the system as a whole (THE output)
a) Correction factors for all the modules
This module, being of a negative-feedback type, provides the user with a level of control over the output of the system and keeps it inside some kind of boundaries (amplitude, time-dependent variation, spectral characteristics, etc.). A genetic/evolutionary algorithm could be implemented here as well, to allow the system, by adjusting all the parameters, to look for interesting modes of operation itself.
This is very important: firstly, it would throttle down the system if it started to update the output too frequently, say 10 times per second (and that will happen if it is based on random input); afterwards the restrictions would lift slowly, allowing the system to recover without user intervention. Secondly, it could turn out to be a source of interesting data and patterns itself (being an evolving system).
END OF EXAMPLE
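The Stage 3 throttling described above can be sketched as a simple rate limiter: the minimum allowed interval between updates doubles whenever updates arrive too fast, then decays back toward a floor on its own. All the constants here are illustrative:

```python
class OutputThrottle:
    """Negative-feedback throttle: tightens under rapid updates,
    relaxes by itself so the system recovers without the user."""
    def __init__(self, min_interval=0.1, decay=0.9):
        self.floor = min_interval   # fastest allowed rate, ever
        self.interval = min_interval
        self.decay = decay          # how quickly restrictions lift
        self.last = None

    def allow(self, now):
        if self.last is not None and now - self.last < self.interval:
            self.interval *= 2.0    # too frequent: tighten the limit
            return False
        # accepted: relax the limit back toward the floor
        self.interval = max(self.floor, self.interval * self.decay)
        self.last = now
        return True

t = OutputThrottle()
print([t.allow(x) for x in (0.0, 0.01, 0.02, 1.0)])
```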
What do you think, eleven? I’d really like to help, so deciding on a clear task would point me in the right direction.
By the way, would you like to keep this discussion on your site, or should we move to e-mail? The posts are getting lengthy.
It’s kind of awkward because we haven’t stayed in touch for some time, but somehow I really think I’m getting your idea.”
“Sounds complicated, but interesting enough to try some of that out. I’m still curious about how much actual random data versus deterministic signal generation you would like to use, and how exactly that data should morph your signal. Making it all work wouldn’t be as hard as devising a proper algorithm that does not require the user to constantly adjust the weight of the probabilistic influence (if so, the randomness is shifted from the net to the user, and that’s not it, right).
Anyway, my math and physics skills are at your disposal, eleven. At least here and there, in my usual manner of undetermined appearance on the net.”
“hey, nice to see you on here! well, I am going to write my personal mumblings to you soon, sorry about the delay, you probably understand why :>
anyway, back to the social sampler. as you might have seen, I am going through the process of extracting the DATA from twitter feeds; a proper semantic algorithm is needed to be able to get only relevant data. I am curious myself how that data will morph the signal and what exact approach will be taken to control randomness versus static. well, the randomness I would like to see should be coming from twitter/youtube itself, as they are being constantly updated. basically, a user will be able to input a word/phrase and that will nicely determine what content is coming down, so the user can concentrate on actually remixing it. the remix part is the second stage of this development, but it may become integrated into the whole data analysis stream much sooner than expected – I like to experiment with various things at one time.
the thing which confuses me is the fact that I may be doing too much work trying to sort out twitter search results while I could actually get more macroscopic data, such as trends. however, I believe that by doing all of this sorting I am learning a LOT. I was inspired to read some great O’Reilly house books on Ambient Findability, Processing and Data Visualization. learning is slow but really rewarding.
Solarsd, I would love to use your skills as I start implementing semantic analysis algorithms (or begin understanding working models built in max). Have you had any interest in, or understanding of, Hidden Markov models? it seems to be the right thing to use, however I am not sure if it would be difficult to implement; I haven’t really done much research on it…
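For a sense of what “using a Hidden Markov model” amounts to here, a tiny Viterbi decoder: given assumed transition and emission probabilities, it recovers the most likely hidden state sequence behind a sequence of observations. The relevant/spam model below is a made-up toy, not a trained model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s]: probability of the best path ending in state s at step t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    return path[max(states, key=lambda s: V[-1][s])]

states = ("relevant", "spam")
start_p = {"relevant": 0.6, "spam": 0.4}
trans_p = {"relevant": {"relevant": 0.7, "spam": 0.3},
           "spam":     {"relevant": 0.4, "spam": 0.6}}
emit_p  = {"relevant": {"topic": 0.8, "link": 0.2},
           "spam":     {"topic": 0.3, "link": 0.7}}
print(viterbi(["topic", "link", "link"], states, start_p, trans_p, emit_p))
```

The hard part in practice is not this decoding step but estimating the probabilities from real twitter data.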
well my brain is fried so I am sorry if I am too abstract or just talking bollocks :> I am looking forward to your replies!”
“Your idea stands right in the middle of what’s currently going on in music & the web. It really seems possible to make textual info from social media act as a modulator: besides the “taps” You’ve mentioned, You can track the changes of the initial “taps” – reposted/quoted/retweeted messages tend to warp and morph with time. But using these as realtime data can be difficult – even using them to, say, shift phase will require at least a few messages per second – all on the same subject! There must be a very strong initial “signal” to trigger such a reaction and make it last long enough. I haven’t developed with Java since 2003; I will try to refresh my skills and then offer help.”
In regard to real-time data – you are quite right, but it really depends whether it is used to, as you say, shift phase, or as a modulator variable which can initialize some calculations. Also, I have started to think that the main point of this development is not doing everything in real-time but analysing and mapping existing and emerging data; just by doing a simple search on twitter you get results which are recent enough. Moreover, it may be interesting to extract single words or any other data and look them up again so we get more results. Doing this again and again maintains a constant level of feedback. The user could decide the amount of such (mainly) irrelevant data to use (a simple knob to control the ‘filter’ resolution – in our case, the number of matching phrases, words, morphemes).
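The look-it-up-again feedback loop with a resolution knob might look like this sketch; `search` is a stand-in for a real twitter search call, and all names are illustrative:

```python
from collections import Counter

def refeed(search, seed, resolution=3, rounds=2):
    """Run `rounds` of lookups; each round re-queries the `resolution`
    most common words found in the previous round's results."""
    queries, seen = [seed], []
    for _ in range(rounds):
        words = Counter()
        for q in queries:
            for post in search(q):
                seen.append(post)
                words.update(post.lower().split())
        queries = [w for w, _ in words.most_common(resolution)]
    return seen, queries
```

Turning the `resolution` knob up pulls in more (mainly irrelevant) material per round, which is exactly the controlled noise level described above.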
Another idea which could be implemented (will do research soon) is to transform video to sound (kinda like this: http://faculty.washington.edu/dillon/PhonResources/javoice/vowjavoice2.html but applied to moving images) and then to convolve it with the sound of that video. Or it could be any other requested sound which convolves it! Or, if it’s not possible to develop video-to-sound code, then simple sound convolution would still be good. Audio taken from a youtube video convolved with the audio of another relevant video – what would that sound like? I need to try this out soon.
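To make the convolution idea concrete, here is the plain time-domain form over two short signals (real audio would use FFT-based convolution for speed; this is just the definition):

```python
def convolve(a, b):
    """Direct convolution: each sample of a smears b across the output."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Convolving with a unit impulse leaves the signal unchanged:
print(convolve([1.0, 0.5, 0.25], [1.0]))
```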
Above is just a starting point however a good one. Let’s work from here.”
The internet is an amplifier of noise, an echo chamber, a big delay processor. Take trends and posts on twitter and imagine them as taps within a delay, tails fed back around, creating massive noise.
Truncated phrases, taken off social media, sourced from videos, transformed into spectral noise which fluctuates in real-time.
Delay/reverb/echo-chamber? A post is an initial signal which gets repeated over many retweets. These retweets mutate/disappear/bounce. Hashtags represent reflections, bounces, moments of happening.
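Taking the metaphor literally, retweet timestamps could be mapped straight onto delay taps, with gain decaying the further a retweet is from the original post. A hypothetical sketch (the decay constant is arbitrary):

```python
def retweets_to_taps(post_time, retweet_times, decay=0.001):
    """Map each retweet to a (delay_time, gain) tap for a multi-tap delay."""
    taps = []
    for t in sorted(retweet_times):
        dt = t - post_time            # seconds after the original post
        taps.append((dt, 1.0 / (1.0 + decay * dt)))  # later echoes are quieter
    return taps

print(retweets_to_taps(0, [1000, 3000]))
```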
Who started what? Can a sound generated out of twitter noise be of any use? How is filtering RSS similar to filtering audio through lp/hp/bp/whatever? How about compressing/limiting the noise? But if it is noise, are there any dynamics…? How does one create a mix of elements? Who creates the filters? Can I transform social media data into music? Would it be evolving, or predetermined as filtering is? May I find it difficult to implement evolutionary processes into this music? Do I actually need them if the evolution happens in real-time online?
I will try my best to create a “social sampler” using Max/MSP, some Java, some APIs.
I need to connect to the twitter API and source the lists of trends, and track streams of posts. Then map this data to (let’s say) relevant videos on youtube via tags, popularity, titles etc. Videos are then streamed to a Java decoder which extracts audio. Audio is passed to Max/MSP…
All of this in real-time.
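The chain just described, reduced to function stubs to show the data flow – every fetcher here is a placeholder for the real twitter/youtube API calls and the Java decoder, not their actual interfaces:

```python
def fetch_trends():
    # stand-in for a twitter trends API request
    return ["#noise", "#echo"]

def match_videos(trend):
    # stand-in for a youtube search by tags/titles/popularity
    return ["video-for-" + trend]

def extract_audio(video):
    # stand-in for the Java decoder that strips audio from a stream
    return "audio(" + video + ")"

def pipeline():
    """trends -> matching videos -> extracted audio, ready for Max/MSP."""
    return [extract_audio(v)
            for t in fetch_trends()
            for v in match_videos(t)]

print(pipeline())
```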