PokéRogue

    Interface AudioContext

    The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode.

    MDN Reference

    interface AudioContext {
        audioWorklet: AudioWorklet;
        baseLatency: number;
        currentTime: number;
        destination: AudioDestinationNode;
        listener: AudioListener;
        onstatechange: ((this: BaseAudioContext, ev: Event) => any) | null;
        outputLatency: number;
        sampleRate: number;
        state: AudioContextState;
        addEventListener<K extends "statechange">(
            type: K,
            listener: (this: AudioContext, ev: BaseAudioContextEventMap[K]) => any,
            options?: boolean | AddEventListenerOptions,
        ): void;
        addEventListener(
            type: string,
            listener: EventListenerOrEventListenerObject,
            options?: boolean | AddEventListenerOptions,
        ): void;
        close(): Promise<void>;
        createAnalyser(): AnalyserNode;
        createBiquadFilter(): BiquadFilterNode;
        createBuffer(
            numberOfChannels: number,
            length: number,
            sampleRate: number,
        ): AudioBuffer;
        createBufferSource(): AudioBufferSourceNode;
        createChannelMerger(numberOfInputs?: number): ChannelMergerNode;
        createChannelSplitter(numberOfOutputs?: number): ChannelSplitterNode;
        createConstantSource(): ConstantSourceNode;
        createConvolver(): ConvolverNode;
        createDelay(maxDelayTime?: number): DelayNode;
        createDynamicsCompressor(): DynamicsCompressorNode;
        createGain(): GainNode;
        createIIRFilter(feedforward: number[], feedback: number[]): IIRFilterNode;
        createIIRFilter(
            feedforward: Iterable<number>,
            feedback: Iterable<number>,
        ): IIRFilterNode;
        createMediaElementSource(
            mediaElement: HTMLMediaElement,
        ): MediaElementAudioSourceNode;
        createMediaStreamDestination(): MediaStreamAudioDestinationNode;
        createMediaStreamSource(
            mediaStream: MediaStream,
        ): MediaStreamAudioSourceNode;
        createOscillator(): OscillatorNode;
        createPanner(): PannerNode;
        createPeriodicWave(
            real: Float32Array<ArrayBufferLike> | number[],
            imag: Float32Array<ArrayBufferLike> | number[],
            constraints?: PeriodicWaveConstraints,
        ): PeriodicWave;
        createPeriodicWave(
            real: Iterable<number>,
            imag: Iterable<number>,
            constraints?: PeriodicWaveConstraints,
        ): PeriodicWave;
        createScriptProcessor(
            bufferSize?: number,
            numberOfInputChannels?: number,
            numberOfOutputChannels?: number,
        ): ScriptProcessorNode;
        createStereoPanner(): StereoPannerNode;
        createWaveShaper(): WaveShaperNode;
        decodeAudioData(
            audioData: ArrayBuffer,
            successCallback?: DecodeSuccessCallback | null,
            errorCallback?: DecodeErrorCallback | null,
        ): Promise<AudioBuffer>;
        dispatchEvent(event: Event): boolean;
        getOutputTimestamp(): AudioTimestamp;
        removeEventListener<K extends "statechange">(
            type: K,
            listener: (this: AudioContext, ev: BaseAudioContextEventMap[K]) => any,
            options?: boolean | EventListenerOptions,
        ): void;
        removeEventListener(
            type: string,
            listener: EventListenerOrEventListenerObject,
            options?: boolean | EventListenerOptions,
        ): void;
        resume(): Promise<void>;
        suspend(): Promise<void>;
    }
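The interface above can be exercised with a few lines of glue. The sketch below wires the classic oscillator → gain → destination graph; `midiToFreq` and `playNote` are illustrative helper names, not part of the API.

```typescript
// Pure helper: MIDI note number -> frequency in Hz (A4 = note 69 = 440 Hz).
function midiToFreq(note: number): number {
  return 440 * 2 ** ((note - 69) / 12);
}

// Build a minimal oscillator -> gain -> destination graph and play a note.
function playNote(ctx: AudioContext, note: number, durationSec: number): void {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = midiToFreq(note);
  gain.gain.value = 0.2; // keep the level well below clipping
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + durationSec);
}
```

In a page script this would typically be called from a user-gesture handler, since browsers may refuse to start audio otherwise.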

    Hierarchy

        BaseAudioContext
            AudioContext

    Properties

    audioWorklet: AudioWorklet

    The audioWorklet read-only property of the BaseAudioContext interface returns an instance of AudioWorklet that can be used to add AudioWorkletProcessor-derived classes which implement custom audio processing. Available only in secure contexts.

    MDN Reference

    baseLatency: number

    The baseLatency read-only property of the AudioContext interface returns a double that represents the number of seconds of processing latency incurred by the AudioContext passing an audio buffer from the AudioDestinationNode (i.e., the end of the audio graph) into the host system's audio subsystem ready for playing.

    MDN Reference

    currentTime: number

    The currentTime read-only property of the BaseAudioContext interface returns a double representing an ever-increasing hardware timestamp in seconds that can be used for scheduling audio playback, visualizing timelines, etc.

    MDN Reference
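Because currentTime advances on the audio hardware's clock, it is the clock to schedule playback against. A common pattern is to quantize events to a beat grid; `nextGridTime` and `scheduleBlip` below are illustrative names, a sketch rather than part of the API.

```typescript
// Pure helper: the next beat-grid boundary at or after `now` (both in seconds).
function nextGridTime(now: number, interval: number): number {
  return Math.ceil(now / interval) * interval;
}

// Schedule a short oscillator blip exactly on the next grid boundary.
function scheduleBlip(ctx: AudioContext, interval: number): void {
  const when = nextGridTime(ctx.currentTime, interval);
  const osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  osc.start(when); // starts at `when`, not "as soon as possible"
  osc.stop(when + 0.05);
}
```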

    destination: AudioDestinationNode

    The destination property of the BaseAudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

    MDN Reference

    listener: AudioListener

    The listener property of the BaseAudioContext interface returns an AudioListener object that can then be used for implementing 3D audio spatialization.

    MDN Reference

    onstatechange: ((this: BaseAudioContext, ev: Event) => any) | null
    outputLatency: number

    The outputLatency read-only property of the AudioContext Interface provides an estimation of the output latency of the current audio context.

    MDN Reference

    sampleRate: number

    The sampleRate property of the BaseAudioContext interface returns a floating point number representing the sample rate, in samples per second, used by all nodes in this audio context.

    MDN Reference

    state: AudioContextState

    The state read-only property of the BaseAudioContext interface returns the current state of the AudioContext.

    MDN Reference

    Methods

    • The close() method of the AudioContext Interface closes the audio context, releasing any system audio resources that it uses.

      MDN Reference

      Returns Promise<void>

    • The createBiquadFilter() method of the BaseAudioContext interface creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter types.

      MDN Reference

      Returns BiquadFilterNode

    • The createBuffer() method of the BaseAudioContext Interface is used to create a new, empty AudioBuffer object, which can then be populated by data, and played via an AudioBufferSourceNode.

      MDN Reference

      Parameters

      • numberOfChannels: number
      • length: number
      • sampleRate: number

      Returns AudioBuffer

    • The createChannelMerger() method of the BaseAudioContext interface creates a ChannelMergerNode, which combines channels from multiple audio streams into a single audio stream.

      MDN Reference

      Parameters

      • Optional numberOfInputs: number

      Returns ChannelMergerNode

    • The createChannelSplitter() method of the BaseAudioContext Interface is used to create a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.

      MDN Reference

      Parameters

      • Optional numberOfOutputs: number

      Returns ChannelSplitterNode

    • The createConvolver() method of the BaseAudioContext interface creates a ConvolverNode, which is commonly used to apply reverb effects to your audio.

      MDN Reference

      Returns ConvolverNode

    • The createDelay() method of the BaseAudioContext interface creates a DelayNode, which is used to delay the incoming audio signal by a certain amount of time.

      MDN Reference

      Parameters

      • Optional maxDelayTime: number

      Returns DelayNode

    • The createGain() method of the BaseAudioContext interface creates a GainNode, which can be used to control the overall gain (or volume) of the audio graph.

      MDN Reference

      Returns GainNode
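GainNodes are also the usual tool for click-free fades between sources. The sketch below builds equal-power crossfade curves suitable for AudioParam.setValueCurveAtTime; `equalPowerCurves` and `crossfade` are illustrative names, not part of the API.

```typescript
// Pure helper: equal-power crossfade curves of `steps` points.
// One gain fades out along cos, the other fades in along sin, so
// out^2 + in^2 stays 1 and perceived loudness is roughly constant.
function equalPowerCurves(steps: number): { out: Float32Array; in: Float32Array } {
  const out = new Float32Array(steps);
  const inn = new Float32Array(steps);
  for (let i = 0; i < steps; i++) {
    const t = i / (steps - 1);
    out[i] = Math.cos((t * Math.PI) / 2);
    inn[i] = Math.sin((t * Math.PI) / 2);
  }
  return { out, in: inn };
}

// Apply the curves to two GainNodes over one second, starting now.
function crossfade(ctx: AudioContext, a: GainNode, b: GainNode): void {
  const { out, in: fadeIn } = equalPowerCurves(64);
  a.gain.setValueCurveAtTime(out, ctx.currentTime, 1);
  b.gain.setValueCurveAtTime(fadeIn, ctx.currentTime, 1);
}
```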

    • The createIIRFilter() method of the BaseAudioContext interface creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter which can be configured to serve as various types of filter.

      MDN Reference

      Parameters

      • feedforward: number[]
      • feedback: number[]

      Returns IIRFilterNode

    • The createIIRFilter() method of the BaseAudioContext interface creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter which can be configured to serve as various types of filter.

      MDN Reference

      Parameters

      Returns IIRFilterNode
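The two overloads differ only in accepting plain arrays or iterables of coefficients; the spec defines how the coefficients are applied, not how to design them. As a hedged example, the sketch below hand-rolls a one-pole lowpass design (`onePoleLowpass` is an assumed helper name, not part of the API).

```typescript
// Coefficients for a one-pole lowpass: y[n] = (1 - a)*x[n] + a*y[n-1].
// In createIIRFilter's form this is feedforward = [1 - a] and
// feedback = [1, -a], with a = exp(-2*pi*fc/fs). DC gain is exactly 1.
function onePoleLowpass(cutoffHz: number, sampleRate: number): {
  feedforward: number[];
  feedback: number[];
} {
  const a = Math.exp((-2 * Math.PI * cutoffHz) / sampleRate);
  return { feedforward: [1 - a], feedback: [1, -a] };
}

// In a browser the node would then be created like this:
//   const { feedforward, feedback } = onePoleLowpass(1000, ctx.sampleRate);
//   const filter = ctx.createIIRFilter(feedforward, feedback);
```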

    • The createMediaElementSource() method of the AudioContext Interface is used to create a new MediaElementAudioSourceNode object, given an existing HTML audio or video element, the audio from which can then be played and manipulated.

      MDN Reference

      Parameters

      Returns MediaElementAudioSourceNode

    • The createMediaStreamDestination() method of the AudioContext Interface is used to create a new MediaStreamAudioDestinationNode object associated with a WebRTC MediaStream representing an audio stream, which may be stored in a local file or sent to another computer.

      MDN Reference

      Returns MediaStreamAudioDestinationNode

    • The createMediaStreamSource() method of the AudioContext Interface is used to create a new MediaStreamAudioSourceNode object, given a media stream (say, from a MediaDevices.getUserMedia instance), the audio from which can then be played and manipulated.

      MDN Reference

      Parameters

      Returns MediaStreamAudioSourceNode

    • The createPanner() method of the BaseAudioContext Interface is used to create a new PannerNode, which is used to spatialize an incoming audio stream in 3D space.

      MDN Reference

      Returns PannerNode

    • The createScriptProcessor() method of the BaseAudioContext interface creates a ScriptProcessorNode used for direct audio processing. This feature is deprecated; AudioWorkletNode is its modern replacement.

      MDN Reference

      Parameters

      • Optional bufferSize: number
      • Optional numberOfInputChannels: number
      • Optional numberOfOutputChannels: number

      Returns ScriptProcessorNode

    • The dispatchEvent() method of the EventTarget interface sends an Event to the object, (synchronously) invoking the affected event listeners in the appropriate order.

      MDN Reference

      Parameters

      Returns boolean

    • The getOutputTimestamp() method of the AudioContext interface returns a new AudioTimestamp object containing two audio timestamp values relating to the current audio context.

      MDN Reference

      Returns AudioTimestamp
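The returned AudioTimestamp pairs a contextTime (seconds, on the currentTime axis) with a performanceTime (milliseconds, on the performance.now() axis), which lets you translate between the two clocks, e.g. to line visuals up with what is actually being heard. A sketch, using a local Timestamp shape and an illustrative perfToContextTime helper:

```typescript
// Local stand-in for the relevant AudioTimestamp fields.
interface Timestamp {
  contextTime: number;     // seconds, same axis as currentTime
  performanceTime: number; // milliseconds, same axis as performance.now()
}

// Map a performance.now() reading onto the context's time axis.
function perfToContextTime(ts: Timestamp, perfNowMs: number): number {
  return ts.contextTime + (perfNowMs - ts.performanceTime) / 1000;
}

// In a browser:
//   const ts = ctx.getOutputTimestamp();
//   const audibleNow = perfToContextTime(ts, performance.now());
```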

    • The removeEventListener() method of the EventTarget interface removes an event listener previously registered with addEventListener() from the target.

      Type Parameters

      • K extends "statechange"

      Parameters

      Returns void

    • The removeEventListener() method of the EventTarget interface removes an event listener previously registered with addEventListener() from the target.

      Parameters

      Returns void

    • The resume() method of the AudioContext interface resumes the progression of time in an audio context that has previously been suspended.

      MDN Reference

      Returns Promise<void>

    • The suspend() method of the AudioContext Interface suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process — this is useful if you want an application to power down the audio hardware when it will not be using an audio context for a while.

      MDN Reference

      Returns Promise<void>
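suspend() and resume() matter in practice because browser autoplay policies often create contexts that start out suspended. The sketch below resumes a context on the first user gesture and models the state transitions; `unlockAudio` and `nextState` are illustrative names, and the transition model is a simplification (the real methods are asynchronous and can reject, and close() is irreversible).

```typescript
// Minimal model of how resume()/suspend()/close() move `state`
// ("closed" is terminal; real calls are async and can reject).
type CtxState = "suspended" | "running" | "closed";

function nextState(state: CtxState, op: "resume" | "suspend" | "close"): CtxState {
  if (state === "closed") return "closed";
  if (op === "close") return "closed";
  return op === "resume" ? "running" : "suspended";
}

// Common pattern: resume a suspended context on the first user gesture.
function unlockAudio(ctx: AudioContext): void {
  const onGesture = async (): Promise<void> => {
    if (ctx.state === "suspended") {
      await ctx.resume();
    }
    document.removeEventListener("pointerdown", onGesture);
  };
  document.addEventListener("pointerdown", onGesture);
}
```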