Jul 03 2011
 

When doing games intended for viral distribution on the web, integrating with Facebook can be a bit cumbersome. The API seems to be primarily meant for use on a site embedded in a Facebook page, and what's worse, with a viral game you will not have control over the embedding page.

Here are a few tricks that will help you overcome those issues. However, there are still a few limitations:

  • Sending invites directly to a friend will not work, since loading the friend selection dialog is not possible unless the embedding page is hosted on the canvas URL
  • It relies on ExternalInterface, and hence will not work if allowScriptAccess is set to “never”. The majority of gaming portals do allow external script access, but Kongregate doesn’t, and very recently Newgrounds seems to have changed theirs so they don’t either. Mindjolt does allow external script access, but asked me to remove the Facebook functionality.

Also, this is not a complete guide. You probably need decent knowledge of integrating Facebook with Flash to make much sense of it, but hopefully it could give you some ideas if you are struggling with how to get around the various issues involved with accessing the API when you have an application not hosted by yourself.

First of all, unless you plan to have your swf only on sites that do allow script access, you need to check whether ExternalInterface works, and if it doesn’t, disable your Facebook functionality. One would think that checking ExternalInterface.available would tell you that, but it doesn’t tell you whether the security sandbox will actually let you use ExternalInterface.
So instead I use the following code:

var hasScriptAccess:Boolean = false;
try
{
	// ExternalInterface.call can throw a SecurityError when script access is blocked
	if (ExternalInterface.call("Date"))
	{
		hasScriptAccess = true;
	}
}
catch (e:SecurityError) {}

If ExternalInterface is available you can include the JS API in the embedding page and add the “fb-root” div. I also use a JS function to open the login pop-up. The following function adds the necessary JS to the embedding page:

private function addFBJS():void
{
	const script_js:XML =
		<script>
			<![CDATA[
			function () {						
				var body = document.getElementsByTagName('body')[0];
				var head = document.getElementsByTagName('head')[0]; 
				
				var fbDiv = document.createElement('div');
				fbDiv.setAttribute('id', 'fb-root');
				body.insertBefore(fbDiv, body.firstChild);
				
				var fbScript = document.createElement('script');
				fbScript.setAttribute('async', '');
				fbScript.setAttribute('type' ,'text/javascript');
				fbScript.setAttribute('src', 'http://connect.facebook.net/en_US/all.js');
				fbDiv.appendChild(fbScript);

				var fbWinScript = document.createElement('script');
				fbWinScript.setAttribute('type' , 'text/javascript');
				fbWinScript.text = "var fbWin = null; function fbWinIsClosed(){ return (fbWin == null || fbWin.closed); } function openLogin(url) { if (fbWinIsClosed()) { fbWin = window.open(url, 'fbwin', 'toolbar=0,menubar=0,resizable=1,width=800,height=480'); }}";
				head.appendChild(fbWinScript);
			}
			
			]]>
		</script>;
	ExternalInterface.call(script_js);
}

Just make sure to call that function before accessing the API.

One issue that turned out to be problematic was that Chrome kept giving me lots of warnings about cross-domain access. To solve that you need to make sure the Flash transport is used, instead of the default fragment transport.

I call the following function to handle browser detection and set the transports as required. I have tested in IE, Firefox, Chrome and Opera. You might have to switch the transport in Safari as well, but I have not tested it and don’t know what will be used by default. The actual browser detection code is from http://www.quirksmode.org/js/detect.html

private function setXD():void
{
	const script_js:XML =
		<script>
			<![CDATA[
			var BrowserDetect = {
				init: function () {
					this.browser = this.searchString(this.dataBrowser) || "An unknown browser";
					this.version = this.searchVersion(navigator.userAgent)
						|| this.searchVersion(navigator.appVersion)
						|| "an unknown version";
					this.OS = this.searchString(this.dataOS) || "an unknown OS";
				},
				searchString: function (data) {
					for (var i=0;i<data.length;i++)	{
						var dataString = data[i].string;
						var dataProp = data[i].prop;
						this.versionSearchString = data[i].versionSearch || data[i].identity;
						if (dataString) {
							if (dataString.indexOf(data[i].subString) != -1)
								return data[i].identity;
						}
						else if (dataProp)
							return data[i].identity;
					}
				},
				searchVersion: function (dataString) {
					var index = dataString.indexOf(this.versionSearchString);
					if (index == -1) return;
					return parseFloat(dataString.substring(index+this.versionSearchString.length+1));
				},
				dataBrowser: [
					{
						string: navigator.userAgent,
						subString: "Chrome",
						identity: "Chrome"
					},
					{ 	string: navigator.userAgent,
						subString: "OmniWeb",
						versionSearch: "OmniWeb/",
						identity: "OmniWeb"
					},
					{
						string: navigator.vendor,
						subString: "Apple",
						identity: "Safari",
						versionSearch: "Version"
					},
					{
						prop: window.opera,
						identity: "Opera"
					},
					{
						string: navigator.vendor,
						subString: "iCab",
						identity: "iCab"
					},
					{
						string: navigator.vendor,
						subString: "KDE",
						identity: "Konqueror"
					},
					{
						string: navigator.userAgent,
						subString: "Firefox",
						identity: "Firefox"
					},
					{
						string: navigator.vendor,
						subString: "Camino",
						identity: "Camino"
					},
					{		// for newer Netscapes (6+)
						string: navigator.userAgent,
						subString: "Netscape",
						identity: "Netscape"
					},
					{
						string: navigator.userAgent,
						subString: "MSIE",
						identity: "Explorer",
						versionSearch: "MSIE"
					},
					{
						string: navigator.userAgent,
						subString: "Gecko",
						identity: "Mozilla",
						versionSearch: "rv"
					},
					{ 		// for older Netscapes (4-)
						string: navigator.userAgent,
						subString: "Mozilla",
						identity: "Netscape",
						versionSearch: "Mozilla"
					}
				],
				dataOS : [
					{
						string: navigator.platform,
						subString: "Win",
						identity: "Windows"
					},
					{
						string: navigator.platform,
						subString: "Mac",
						identity: "Mac"
					},
					{
						   string: navigator.userAgent,
						   subString: "iPhone",
						   identity: "iPhone/iPod"
					},
					{
						string: navigator.platform,
						subString: "Linux",
						identity: "Linux"
					}
				]

			};
			BrowserDetect.init();

			function() {
				if (BrowserDetect.browser == 'Chrome') {
					FB.XD._origin = window.location.protocol + '//' + document.domain + '/' + FB.guid();
					FB.XD.Flash.init();
					FB.XD._transport = 'flash';
					
				} else if (BrowserDetect.browser == 'Opera') {
					FB.XD._transport = 'fragment';
					FB.XD.Fragment._channelUrl = window.location.protocol + '//' + window.location.host + '/'
				} 
			}
			]]>
		</script>;
	ExternalInterface.call(script_js);
}

To handle the login you need to host a script on the host used for the canvas URL. It’s unfortunate that one needs external dependencies, but otherwise you would need to include your app secret in your swf, which is not a great idea.
I use the following PHP script:

<?php
    // your app id
    $app_id = "xxxxxxxxxxxxxxxxxx"; 
    // your app secret
    $app_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; 
    // path to this script, located under the app URL
    $my_url = "http://example.com/facebookLogin.php";

    session_start();
	
    $code = $_REQUEST["code"];
    if (empty($code) && !isset($_GET["getstatus"])) 
	{
        $_SESSION['state'] = md5(uniqid(rand(), TRUE)); //CSRF protection
		
		$dialog_url = "https://www.facebook.com/dialog/oauth?client_id=" 
			. $app_id . "&redirect_uri=" . urlencode($my_url) . "&state="
							. $_SESSION['state'];
		echo("<script> top.location.href='" . $dialog_url . "'</script>");
    } else if ($_SESSION['state']) {
			if (!empty($code))
			{
				$_SESSION['code'] = $code;
			}
			if($_SESSION['code'])
			{
				$token_url = "https://graph.facebook.com/oauth/access_token?"
				  . "client_id=" . $app_id . "&redirect_uri=" . urlencode($my_url)
				  . "&client_secret=" . $app_secret . "&code=" . $_SESSION['code'];

				$response = file_get_contents($token_url);
				
				$params = null;
				parse_str($response, $params);

				$graph_url = "https://graph.facebook.com/me?access_token=" 
				  . $params["access_token"];

				$user = json_decode(file_get_contents($graph_url));
				if(isset($_GET["getstatus"]))
				{
					echo "session=true&uid=".$user->id."&userName=".$user->name;
				} else
				{	
				echo("<script>setTimeout(function(){ top.location.href='channel.html'; }, 1000);</script>");
				}
			} else
			{
				echo "session=false";
			}
	} else 
	{
		if(isset($_GET["getstatus"]))
		{
			echo "session=false";
		} else
		{
			echo("<script>setTimeout(function(){ top.location.href='channel.html'; }, 1000);</script>");
		}
	}
?>

Notice the “channel.html”. This is a simple HTML file that serves two purposes: it acts as a cross-domain scripting channel, and it closes its own window.
The contents of the file looks like this:

<script src="http://connect.facebook.net/en_US/all.js"></script>
<script>self.close();</script>

Name that “channel.html” and put it at your canvas URL.

Now, to log in in your flash application you have to first initialize:

Facebook.init(_appId, onFBInit, {channelUrl:_appUrl + "channel.html"} );

protected function onFBInit(result:Object,fail:Object):void
{
	if (!result) return;
	_session = result as FacebookSession;
	if (result && result.user)
	{
		_loggedIn = true;
		_session = new FacebookSession();
		_session.user = { name:result.user.name, id:result.user.id };
	} 	
}

_appId is of course the application id you got from Facebook, and _appUrl is the canvas URL where you uploaded “channel.html”.

The Facebook.init callback can be a bit unreliable, but if it works onFBInit will store any current session to avoid the need to log in again.

Then, to log in the user, check if the session user is set; otherwise open the pop-up. Also set a timer to poll the PHP script to see when the session cookie has been set.
I use TweenLite for the timer, AS3Signals for events and Destroytoday’s Promise to handle async responses. This is just to give you an idea of how the login process works; you can of course use the built-in Timer and events instead:

public function login(caller:String=null):Promise
{
	_loginPromise = new Promise();
	_checkLoginAttempt = 0;
	if (!_session || !_session.user) 
	{
		openLogin(_appUrl + "facebookLogin.php");
	}
	TweenLite.delayedCall(8, checkLogin);
	return _loginPromise;
}

private function openLogin(loginUrl:String):void
{
	ExternalInterface.call("openLogin", loginUrl);
}

private function checkLogin():void
{
	if (!_session || !_session.user)
	{
		var req:URLRequest = new URLRequest(_appUrl + "facebookLogin.php?getstatus=1");
		var loader:URLLoader = new URLLoader(req);
		var loadedSignal:NativeSignal = new NativeSignal(loader, Event.COMPLETE);
		loadedSignal.add(onCheckLogin);
	} else
	{
		start();
	}
}

private function onCheckLogin(e:Event):void
{
	var loader:URLLoader = e.currentTarget as URLLoader;
	var vars:URLVariables;
	if (_checkLoginAttempt == 5)
	{
		var winClosed:Boolean = ExternalInterface.call("fbWinIsClosed");
		if (winClosed == true)
		{
			_loginPromise.dispatchError("Could not read Facebook cookie. Please try again.");
			_loginPromise.dispose();
			return;
		}
	}
	if (loader.data) vars = new URLVariables(loader.data);
	if (vars && vars.session == "false" && vars.isset == "true")
	{
		_loginPromise.dispatchError("Could not log in to Facebook. Cookie not valid and has been reset. Please try again.");
		_loginPromise.dispose();
		return;
	}
	if (!vars || !vars.session || vars.session == "false")
	{
		_checkLoginAttempt++;
		TweenLite.delayedCall(2, checkLogin);
	} else
	{
		_session = new FacebookSession();
		_session.user = { name:vars.userName, id:vars.uid };
		start();
	}
}

Now everything should be set up and you can make your API calls as usual. For example, to post to the wall:

public function postToWall(name:String, 
				    prompt:String, 
				    message:String, 
				    caption:String, 
				    link:String, 
				    imagePath:String):void
{
	var o:Object = 
	{
		user_message_prompt: prompt,
		message: message,
		attachment: 
		{
			media: [{
				type: "image",
				href: link,
				src: imagePath
			}],
			name: name,
			href: link,
			caption: caption,
			description: ""
		},
		action_links: 
		[{ 
		   text: prompt, 
		   href: link 
		}],
		next:_appUrl + "channel.html"
	};
	Facebook.ui("stream.publish", o, null, "popup");
}

As you can see it’s all very obvious, simple and straightforward ;)

I just hope Google+ will take over from Facebook in the not too distant future, since I can’t imagine that their API will be anywhere near as tedious to work with.
And of course, keep in mind that this information will likely be outdated in a few months when Facebook decides to make changes to their API once again.

Oct 23 2010
 

Pixel Bender can be a great tool for number crunching, but has a couple of limitations when trying to create an audio mixer.

  • The number of inputs is limited when using the Pixel Bender Toolkit to compile .pbj files.
  • Even if one overcomes that limitation by writing Pixel Bender assembly, the track count is fixed by the number of inputs in the .pbj.

In some cases you might want your application to act as a normal audio app where you can add as many tracks as your CPU can handle, but not use more CPU than needed for the current track count.

What we need to accomplish that is a way to dynamically create a shader with the desired number of inputs. James Ward has created pbjAS, a library that lets you create Pixel Bender shaders at runtime, based on Nicolas Cannasse’s haXe library.
Also, Tinic Uro has posted Pixel Bender assembly code to create a mixer.

So I set out to use pbjAS to recreate Tinic’s code with a dynamic number of channels.
The result is this class:

package com.blixtsystems.audio
{
	import flash.display.Shader;
	import flash.display.ShaderJob;
	import flash.utils.ByteArray;
	import pbjAS.ops.OpAdd;
	import pbjAS.ops.OpMul;
	import pbjAS.ops.OpSampleNearest;
	import pbjAS.params.Parameter;
	import pbjAS.params.Texture;
	import pbjAS.PBJ;
	import pbjAS.PBJAssembler;
	import pbjAS.PBJChannel;
	import pbjAS.PBJParam;
	import pbjAS.PBJType;
	import pbjAS.regs.RFloat;
	/**
	 * Shader to mix audio with a dynamic number of channels
	 * @author leo@blixtsystems.com
	 */
	public class MixerShader
	{
		private var _bufferSize:int;

		private var _pbj:PBJ = new PBJ();
		private var _shader:Shader;
		private var _buffer:Vector.<ByteArray> = new Vector.<ByteArray>();

		private var _numTracks:int;

		/**
		 * Constructor
		 * @param	numTracks	track count
		 */
		public function MixerShader(numTracks:int, bufferSize:int=2048)
		{
			_numTracks = numTracks;
			_bufferSize = bufferSize;
		}

		/*-----------------------------------------------------------
		Public methods
		-------------------------------------------------------------*/
		/**
		 * Mix audio
		 * @param	data	ByteArray in which to store the result, probably SampleDataEvent.data
		 */
		public function mix(data:ByteArray):void
		{
			var mixerJob:ShaderJob = new ShaderJob(_shader, data, 1024, _bufferSize/1024);
			mixerJob.start(true);
		}

		/*-----------------------------------------------------------
		Private methods
		-------------------------------------------------------------*/
		private function assembleShader():void
		{
			var channels:Array = [PBJChannel.R, PBJChannel.G];
			var chanStr:String = "rg";
			_pbj.version = 1;
			_pbj.name = "SoundMixer";
			_pbj.parameters =
			[
				new PBJParam
				(
					"_OutCoord",
					new Parameter
					(
						PBJType.TFloat2,
						false,
						new RFloat(	0, channels)
					)
				)

			];
			_pbj.code =
			[
				new OpSampleNearest
				(
					new RFloat(1, channels),
					new RFloat(0, channels),
					0
				),
				new OpMul
				(
					new RFloat(1, channels),
					new RFloat(3, channels)
				)
			];
			var i:int;
			for (i = 0; i < _numTracks; i++)
			{
				_pbj.parameters.push
				(
					new PBJParam
					(
						"track" + i,
						new Texture(2, i)
					)
				);
			}
			for (i = 0; i < _numTracks; i++)
			{
				_pbj.parameters.push
				(
					new PBJParam
					(
						"volume" + i,
						new Parameter
						(
							PBJType.TFloat2,
							false,
							new RFloat(i + 3, channels)
						)
					)
				);
			}
			for (i = 0; i < _numTracks-1; i++)
			{
				_pbj.code.push
				(
					new OpSampleNearest
					(
						new RFloat(2, channels),
						new RFloat(0, channels),
						i+1
					),
					new OpMul
					(
						new RFloat(2, channels),
						new RFloat(i+4, channels)
					),
					new OpAdd
					(
						new RFloat(1, channels),
						new RFloat(2, channels)
					)
				);
			}
			_pbj.parameters.push
			(
				new PBJParam
				(
					"output",
					new Parameter
					(
						PBJType.TFloat2,
						true,
						new RFloat(1, channels)
					)
				)
			);

			var pbjBytes:ByteArray = PBJAssembler.assemble(_pbj);
			_shader = new Shader(pbjBytes);
			_buffer = new Vector.<ByteArray>(_numTracks);

			// initialize the shader inputs
			for (i = 0; i < _numTracks; i++) {
				_buffer[i] = new ByteArray();
				_buffer[i].length = _bufferSize * 4 * 2;
				_shader.data["track" + i]["width"] = 1024;
				_shader.data["track" + i]["height"] = _bufferSize / 1024;
				_shader.data["track" + i]["input"] = _buffer[i];
				_shader.data["volume" + i]["value"] = [1, 1];
			}

		}

		/*-----------------------------------------------------------
		Getters/Setters
		-------------------------------------------------------------*/
		public function get numTracks():int { return _numTracks; }
		public function set numTracks(value:int):void
		{
			// needs to be at least one input, and no point reassembling the pbj if the track count has not changed
			if (value < 1 || _numTracks == value) return;

			_numTracks = value;
			assembleShader();
		}

		public function get buffer():Vector.<ByteArray> { return _buffer; }

	}

}

To use it you need to download pbjAS from James Ward and include the swc in your project.

Then in your audio engine do the following:

static public const BUFFER_SIZE:int = 2048;

// number of tracks (minimum 1)
private var _numTracks:int = 1;
// instantiate mixer shader
private var _mixerShader:MixerShader;
_mixerShader = new MixerShader(_numTracks, BUFFER_SIZE);

// here you have your sound objects stored
// adding or removing objects will change number of tracks in the shader
public var sounds:Vector.<Sound> = new Vector.<Sound>();

// SampleDataEvent callback
private function samplesCallback(e:SampleDataEvent):void
{
	_numTracks = sounds.length;

	// update the number of tracks for the shader, causing the pbj to recompile
	_mixerShader.numTracks = _numTracks;

	// extract audio data into the shader buffers
	for (var i:int = 0; i < _numTracks; i++)
	{
		_mixerShader.buffer[i].position = 0;
		sounds[i].extract(_mixerShader.buffer[i], BUFFER_SIZE);
		_mixerShader.buffer[i].position = 0;
	}
	// do shader job
	_mixerShader.mix(e.data);
}

According to Tinic Uro, the number of inputs in a shader is limited to 15, so adding more tracks than that will probably not work.

In my tests, playing back eight tracks using pure AS3 often causes CPU usage to peak above 16% of one 2.3 GHz core on my quad-core machine with the debug version of the Flash Player.
If I instead use Pixel Bender, the peaks are rarely above 6%. Adding or removing tracks, which causes the pbj to recompile, does not cause any noticeable spikes in CPU usage.
So using Pixel Bender cuts CPU usage to around a third on my machine!

Big thanks to Tinic Uro, Nicolas Cannasse and James Ward who made it easy to accomplish!

May 30 2008
 

Playing around with the new Flash Player 10 audio processing functionality, the need for optimization becomes very apparent when you want to apply effects to several tracks of audio.

With a sample rate of 44100 and a dozen stereo tracks we are talking over a million samples to be processed per second, where each effect you apply will probably take at least some 30 operations per sample. All of a sudden the great performance of AVM2 becomes quite limiting.
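To put numbers on that, here is the back-of-envelope calculation, sketched in JavaScript (the track count and per-effect operation cost are just the rough figures from the paragraph above):

```javascript
// Rough throughput estimate for the mixing scenario described above.
var sampleRate = 44100;  // samples per second per channel
var tracks = 12;         // "a dozen stereo tracks"
var channels = 2;        // stereo
var samplesPerSecond = sampleRate * tracks * channels;
var opsPerSample = 30;   // rough per-sample cost of one effect
console.log(samplesPerSecond);                // 1058400
console.log(samplesPerSecond * opsPerSample); // 31752000
```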

So it’s important to squeeze out every drop of performance you can by optimizing the code.
First of all, I benchmarked the performance of running code inside the processing loop, in a function, in an external class, and inside a loaded swf (which would have been neat for plugging in effects without recompiling the main swf).

The code I used for testing processes a value and returns it, like this (obviously without the function wrapper when doing the processing in the local scope):

public function calculate(num:Number):Number
{
	return num * 1.01;
}

The time needed in ms when calling the function 10 000 000 times:

  • Locally: 46
  • Calling a function in the same class: 213
  • Calling a function in a separate class: 213
  • Calling a function in an externally loaded swf: 2347
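For reference, the timing harness had roughly this shape (the original tests were in ActionScript 3 using getTimer(); this JavaScript sketch just illustrates the setup, with calculate being the test function from above):

```javascript
// Minimal micro-benchmark harness: time N calls of the test function.
function calculate(num) {
	return num * 1.01;
}

function bench(iterations) {
	var start = Date.now();
	var v = 1;
	for (var i = 0; i < iterations; i++) {
		v = calculate(v);
	}
	// Returning v keeps the loop from being optimized away entirely.
	return { ms: Date.now() - start, result: v };
}

var run = bench(10000000);
console.log(run.ms + " ms");
```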

Not so surprising results.
Having processing code in an external swf is obviously not an option. I tried both simply sticking a function in the swf and putting it in a class retrieved via applicationDomain.getDefinition, and both methods performed equally badly.
Doing the processing locally instead of in a separate function or class is a lot faster, but that can easily become very cumbersome and ugly.
At least nothing is lost by having the function in a separate class compared to having it in the same class.

Something that does surprise me a bit is that when calling the function only once, with the loop inside the function, the resulting time was 75 ms.
That’s about 30 ms added for a single function call, so it seems the first call is a lot more expensive.

One might conclude that the best approach when processing audio, if you want to avoid placing the code inside the samplesCallbackEvent loop, is to call the processing code once and then iterate over the size of the buffer inside the effect class.
This is exactly what Spender suggested when I posted a 3-band EQ example.

The problem there, and the reason my attempt at implementing his suggestion failed to yield an improvement, is that reading and writing floats in a ByteArray is slower than the function calls.
Calling writeFloat 10 000 000 times and then looping through the result to read the values back takes 1727 ms. Compared with the 213 ms for the same number of function calls, it’s clear that function calls are actually comparatively cheap. A Vector fares a bit better than the ByteArray, at 1239 ms.

So the optimal approach seems to be to call samples.readFloat only once, then pass the returned value through function calls for each process you want to apply, before finally calling samplesCallbackData.writeFloat.
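That flow can be sketched like this (plain arrays stand in for the AS3 ByteArrays, and gain and softClip are hypothetical effects; in AS3 the read and write would be samples.readFloat() and samplesCallbackData.writeFloat()):

```javascript
// Read each sample once, chain cheap function calls per effect,
// then write the result once.
function gain(x) { return x * 0.8; }
function softClip(x) { return Math.max(-1, Math.min(1, x)); }

function processBuffer(input) {
	var output = [];
	for (var i = 0; i < input.length; i++) {
		var sample = input[i];      // one "readFloat"
		sample = gain(sample);      // one function call per effect
		sample = softClip(sample);
		output.push(sample);        // one "writeFloat"
	}
	return output;
}
```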

May 19 2008
 

I have had reason to play around with some of the new functionality in Flash Player 10, and Vectors are just awesome.
On top of the benefits of the strict typing, they are about 50% faster than Arrays according to my tests.

Being completely new to the concept, how to create multidimensional Vectors was not completely obvious, since you need to type every dimension when declaring the bottom-level dimension:

var v1:Vector.<Vector.<Vector.<int>>> = new Vector.<Vector.<Vector.<int>>>();
var v2:Vector.<Vector.<int>>;
var v3:Vector.<int>;
for (var i:int = 0; i < 10; i++) {
	v2 = new Vector.<Vector.<int>>();
	for (var ii:int = 0; ii < 10; ii++) {
		v3 = new Vector.<int>();
		for (var iii:int = 0; iii < 10; iii++) {
			v3[iii] = iii;
		}
		v2[ii] = v3;
	}
	v1[i] = v2;
}

 

So far I have mostly been experimenting with the new samplesCallbackData to create a little mixer.
It seems you need to bypass the mixer in the Flash Player if you want to write to the output buffer, because creating several Sounds and then calling samplesCallbackData.writeFloat() on each of them will not work.
Of course each channel doesn’t have its own output buffer, so you can only write to the master output.
The problem I’m having with this is that if one would like to have a level meter for each individual track, I cannot figure out a way to determine which sample is currently being output.
Here is a simplified version of how I implemented the mixing in the SamplesCallbackEvent handler:

var i:int = 0, c:int = 0, l:int = _sounds_vect.length;
while (i < l) {
	samples = new ByteArray();
	samples.position = 0;
	snd = _sounds_vect[i];
	snd.extract(samples, _bufferSize);
	_samples_vect[i] = samples;
	_samples_vect[i].position = 0;
	i++;
}
while (c < _bufferSize) {
	left = 0;
	right = 0;
	i = 0;
	while (i < l) {
		valL = _samples_vect[i].readFloat();
		valR = _samples_vect[i].readFloat();
		left += valL;
		right += valR;
		i++;
	}
	valL = left;
	valR = right;
	_out_snd.samplesCallbackData.writeFloat(valL);
	_out_snd.samplesCallbackData.writeFloat(valR);
	c++;
}

So the audio is mixed in chunks the size of the buffer and then written to the buffer using samplesCallbackData.writeFloat().
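Stripped of the ByteArray plumbing, the core of that loop is just a per-sample sum across tracks, which can be sketched in JavaScript with plain arrays standing in for the sample ByteArrays:

```javascript
// Sum the c-th sample of every track into one output sample.
function mixChunk(tracks, bufferSize) {
	var out = [];
	for (var c = 0; c < bufferSize; c++) {
		var sum = 0;
		for (var i = 0; i < tracks.length; i++) {
			sum += tracks[i][c];
		}
		out.push(sum);
	}
	return out;
}
```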

 

For the main output I can create a level meter using:

_out_chn = new SoundChannel();
_out_chn = _out_snd.play();
function onEnterFrame(e:Event):void {
	_masterSlider.drawLevel(_out_chn.leftPeak, _out_chn.rightPeak);
}

 

But for the individual channels I will never call play(), and hence cannot find a way to get a consistently behaving level meter or spectrum.
I’m sure there is some clever way to do it that is escaping me.

 

What I currently do is to get the value in the mixing loop like so:

while (i < l) {
	valL = _samples_vect[i].readFloat();
	valR = _samples_vect[i].readFloat();
	left += valL;
	right += valR;
	if (c == 0) { // tried with (c == _bufferSize - 1) and (c == _bufferSize/2) as well
		_levelL_vect[i] = valL;
		_levelR_vect[i] = valR;
	}
	i++;
}

I then use the _levelL_vect and _levelR_vect values in my onEnterFrame handler to draw the bars, but the result is a lot less accurate than what is possible using my_chn.leftPeak, and far from satisfactory.
I guess what I would need is a way to tell which sample from the output buffer is playing at a given moment in time.

 

Apart from that small issue it’s great to have the functionality to generate and process audio and I think we will see some very cool applications appearing eventually.

May 16 2008
 

Finally Flash will have a built-in ability to access the sound output buffer, with Flash Player 10 that has just been released.
Tinic Uro has posted a little information about the implementation.
So no more relying on complicated hacks; this is all the code you will need to generate a sine wave (snipped from Tinic’s post):

var sound:Sound = new Sound();
function sineWavGenerator(event:SamplesCallbackEvent):void {
	for (var c:int = 0; c < 1234; c++) {
		var sample:Number = Math.sin(
			(Number(c + event.position) / Math.PI / 2)) * 0.25;
		sound.samplesCallbackData.writeFloat(sample);
		sound.samplesCallbackData.writeFloat(sample);
	}
}
sound.addEventListener("samplesCallback", sineWavGenerator);
sound.play();

May 09 2008
 

I added a little application to the download section.
It takes a Gregorian date and converts it to a dreamspell date and then calculates the kin, guide, analog and antipode to generate the affirmation for that day.
For those of you not familiar with the dreamspell calendar it’s a reinterpretation of the Mayan calendar by Dr. José Argüelles.
It’s based around 20 “glyphs” and 13 “tones” which make up the 260 day year called the Tzolkin.
The idea behind it is to create a new calendar system that actually is in tune with natural cycles, unlike the Gregorian system we use today, but mostly it’s used to cast horoscopes.
Here is one resource for more information about the calendar.
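The 20 glyphs and 13 tones cycle independently, and since lcm(13, 20) = 260 the combination repeats every 260 days, which is where the Tzolkin length comes from. The modular arithmetic can be sketched like this (in JavaScript; the actual dreamspell count also involves a specific epoch and leap-day handling, which are omitted here, so daysSinceEpoch is a hypothetical input):

```javascript
// Map a day count onto the 260-day tone/glyph cycle.
function kinFromDayCount(daysSinceEpoch) {
	var kin = ((daysSinceEpoch % 260) + 260) % 260; // 0..259
	return {
		kin: kin + 1,          // kins are conventionally numbered 1..260
		tone: (kin % 13) + 1,  // 1..13
		glyph: (kin % 20) + 1  // 1..20
	};
}
```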

My wife wanted a neat application for displaying the affirmation of the day and for looking up a person’s birth “kin”, so I put this one together for her:

If this sort of stuff interests you, feel free to hotlink to the swf using the following URL:

http://www.resonantearth.com/ingrid/dreamspell.swf

You can download the swf here.

To get the source visit the download page.

Oct 19 2005
 

Many thanks to Guy Watson, who confirmed that the FFTMode for computeSpectrum doesn’t work yet.
I was hoping to be able to post a proper spectrum analyser for Flash, but that will have to wait until the FFT is sorted.

So for now you can check out the code for the example waveform display I made.
It’s pretty much the same as what was posted on www.richapps.de, but I used the readFloat method to access the data so the wave displays correctly.

Oct 19 2005
 

I was very excited to notice that with AS3 you can now retrieve amplitude and spectrum information. I’ve been hoping for this for a long time, and now it’s there :)
The problem is that it doesn’t seem to work properly.

I looked at the example on www.richapps.de and tried it out.
The problem is that accessing the values of the ByteArray with normal array access brackets will not give the correct values.
It will return the byte value, i.e. a value between 0 and 255. According to the AS3 documentation, computeSpectrum should return a decimal value between -1.0 and 1.0.
Since the length of the byte array is 2048 and the number of values should be 2×256, each value must be stored as a single-precision 32-bit value.
That gives 4 bytes per value, so readFloat is the appropriate method to retrieve the values.
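The arithmetic behind that conclusion, in one line (values from the paragraph above):

```javascript
// 2048 bytes holding 2 channels x 256 values = 4 bytes per value,
// i.e. a single-precision 32-bit float, hence readFloat.
var bytesPerValue = 2048 / (2 * 256);
console.log(bytesPerValue); // 4
```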
So my code to read the array for the right channel looks like this:

var i:int = 1024; // the right channel occupies the second 1024 bytes of the 2048-byte buffer
while (i < 2048) {
	spectrum.position = i;
	sprites_array[Math.round(i / 4)].scaleY = spectrum.readFloat() * 100;
	i += 4;
}

That works fine when FFTMode is false.
It will display the raw wave, as you can see in this example.
(Requires Flash Player 8.5; click to start the sound.)

But what you usually want in order to display a spectrum is to use FFT.
That will analyse the wave and actually turn it into spectral information.
So I tried just setting FFTMode to true… but no luck :(
Here is the swf with FFT applied.
As you can see there is no spectral information. With a sine sweep like this, what you should see is a thin spike travelling from right to left.

Am I doing it completely wrong, or is the FFTMode indeed not working as it should?
Has anyone managed to get it to display a proper spectrum graph yet, and if so, how?
