HaishinKit for iOS, macOS, tvOS, visionOS and Android.
- Camera and Microphone streaming library via RTMP and SRT for iOS, macOS, tvOS and visionOS.
- README.md contains unreleased content, which can be tested on the main branch.
- API Documentation
Sponsored with 💖 by
Enterprise Grade APIs for Feeds & Chat. Try the iOS Chat tutorial 💬
💬 Communication
- If you need help making live-streaming requests with HaishinKit, use GitHub Discussions with the Q&A category.
- If you'd like to discuss a feature request, use GitHub Discussions with the Ideas category.
- If you've found a HaishinKit bug 🐛, open a GitHub Issue using the Bug report template.
  - The trace-level log is very useful. Please set `LBLogger.with(HaishinKitIdentifier).level = .trace`.
  - If you don't use an issue template, your issue will be closed immediately without comment.
- If you want to contribute, submit a pull request using the PR template.
- If you prefer e-mail based communication instead of GitHub, consulting is available at $50 per incident. Please allow a few days for a response.
- Discord chatroom.
- If you understand Japanese, please communicate in Japanese!
💖 Sponsors
🌏 Related projects
| Project name | Notes | License |
| --- | --- | --- |
| HaishinKit for Android. | Camera and Microphone streaming library via RTMP for Android. | BSD 3-Clause "New" or "Revised" License |
| HaishinKit for Flutter. | Camera and Microphone streaming library via RTMP for Flutter. | BSD 3-Clause "New" or "Revised" License |
🎨 Features
RTMP
- [x] Authentication
- [x] Publish and Recording
- [x] Playback (Beta)
- [x] Adaptive bitrate streaming
- [ ] Action Message Format
  - [x] AMF0
  - [ ] AMF3
- [x] SharedObject
- [x] RTMPS
  - [x] Native (RTMP over SSL/TLS)
  - [x] Tunneled (RTMPT over SSL/TLS) (Technical Preview)
- [x] RTMPT (Technical Preview)
- [x] ReplayKit Live as a Broadcast Upload Extension
- [x] Enhanced RTMP
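
The adaptive bitrate feature reacts to network congestion by lowering the encoder's bit rate while publishing. A minimal sketch of the idea is below; note that the `didPublishInsufficientBW` / `didPublishSufficientBW` delegate method names vary between HaishinKit versions (and the delegate protocol may require additional methods), so treat this as an illustration rather than the exact API — check your version's `RTMPStreamDelegate` before relying on these signatures.

```swift
// Sketch only: adjusts stream.videoSettings.bitRate in response to
// bandwidth callbacks. Method names are assumptions for 1.x.
final class BitrateController: RTMPStreamDelegate {
    func rtmpStream(_ stream: RTMPStream, didPublishInsufficientBW connection: RTMPConnection) {
        // Halve the video bit rate, but never drop below 256 kbps.
        stream.videoSettings.bitRate = max(stream.videoSettings.bitRate / 2, 256 * 1000)
    }

    func rtmpStream(_ stream: RTMPStream, didPublishSufficientBW connection: RTMPConnection) {
        // Ramp back up gradually toward the original target.
        stream.videoSettings.bitRate = min(stream.videoSettings.bitRate + 64 * 1000, 640 * 1000)
    }
}
```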
SRT (Beta)
- [x] Publish and Recording (H264/AAC)
- [x] Playback (Beta)
- [ ] mode
  - [x] caller
  - [ ] listener
  - [ ] rendezvous
Multi Camera
Supports two camera video sources: a picture-in-picture display that overlays the secondary camera's image on the primary camera's image, and a split display that arranges the two images horizontally or vertically.
| Picture-In-Picture | Split |
| --- | --- |
```swift
// To use the multi-camera feature, set isMultiCamSessionEnabled = true
// before calling attachCamera or attachAudio.
stream.isMultiCamSessionEnabled = true

let back = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
stream.attachCamera(back, channel: 0) { _, error in
    if let error {
        logger.warn(error)
    }
}

let front = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
stream.attachCamera(front, channel: 1) { videoUnit, error in
    videoUnit?.isVideoMirrored = true
    if let error {
        logger.error(error)
    }
}
```
Rendering
| Features | PiPHKView | MTHKView |
| --- | --- | --- |
| Engine | AVSampleBufferDisplayLayer | Metal |
| Publish | ✔ | ✔ |
| Playback | ✔ | ✔ |
| VisualEffect | ✔ | ✔ |
| MultiCamera | ✔ | ✔ |
| PictureInPicture | ✔ | - |
Others
- [x] Support tvOS 17.0 for AVCaptureSession.
- [x] Support multitasking camera access.
- [x] Support “Allow app extension API only” option
🐾 Examples
Example projects are available for iOS with UIKit, iOS with SwiftUI, macOS, and tvOS. The macOS example requires an Apple Silicon Mac.
- [x] Camera and microphone publish.
- [x] Playback
```sh
git clone https://github.com/shogo4405/HaishinKit.swift.git
cd HaishinKit.swift
carthage bootstrap --platform iOS,macOS,tvOS --use-xcframeworks
open HaishinKit.xcodeproj
```
🌏 Requirements
Development
| Version | Xcode | Swift |
| --- | --- | --- |
| 1.7.0+ | 15.0+ | 5.9+ |
| 1.6.0+ | 15.0+ | 5.8+ |
| 1.5.0+ | 14.0+ | 5.7+ |
OS
| - | iOS | tvOS | macOS | visionOS | watchOS |
| --- | --- | --- | --- | --- | --- |
| HaishinKit | 12.0+ | 12.0+ | 10.13+ | 1.0+ | - |
| SRTHaishinKit | 12.0+ | - | 13.0+ | - | - |
Cocoa Keys
Please add the following keys to your Info.plist.
iOS 10.0+
- NSMicrophoneUsageDescription
- NSCameraUsageDescription
macOS 10.14+
- NSMicrophoneUsageDescription
- NSCameraUsageDescription
tvOS 17.0+
- NSMicrophoneUsageDescription
- NSCameraUsageDescription
🔧 Installation
HaishinKit has a multi-module configuration. If you want to use the SRT protocol, please use SRTHaishinKit. SRTHaishinKit supports SPM only.
| | HaishinKit | SRTHaishinKit |
| - | :- | :- |
| SPM | https://github.com/shogo4405/HaishinKit.swift | https://github.com/shogo4405/HaishinKit.swift |
| CocoaPods | See the Podfile example below. | Not supported. |
| Carthage | github "shogo4405/HaishinKit.swift" ~> 1.6.0 | Not supported. |

CocoaPods Podfile:

```rb
source 'https://github.com/CocoaPods/Specs.git'
use_frameworks!

def import_pods
  pod 'HaishinKit', '~> 1.6.0'
end

target 'Your Target' do
  platform :ios, '12.0'
  import_pods
end
```
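
With SPM you can also declare the dependency in a `Package.swift` manifest instead of through Xcode. A minimal sketch, assuming a `1.6.0` minimum version and a hypothetical target name (pin to the release you actually need):

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "YourApp", // hypothetical package name
    dependencies: [
        // The version shown is an example; pin to the release you need.
        .package(url: "https://github.com/shogo4405/HaishinKit.swift", from: "1.6.0")
    ],
    targets: [
        .target(
            name: "YourApp",
            dependencies: [
                // Use the "SRTHaishinKit" product instead if you need SRT.
                .product(name: "HaishinKit", package: "HaishinKit.swift")
            ]
        )
    ]
)
```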
🔧 Prerequisites
On iOS, make sure you set up and activate your AVAudioSession.
```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
} catch {
    print(error)
}
```
📓 RTMP Usage
Ingest
```swift
let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)
stream.attachAudio(AVCaptureDevice.default(for: .audio)) { error in
    // print(error)
}
stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back), channel: 0) { _, error in
    if let error {
        logger.warn(error)
    }
}

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// Add the view to ViewController#view.
view.addSubview(hkView)

connection.connect("rtmp://localhost/appName/instanceName")
stream.publish("streamName")
```
Playback
```swift
let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// Add the view to ViewController#view.
view.addSubview(hkView)

connection.connect("rtmp://localhost/appName/instanceName")
stream.play("streamName")
```
Authentication
```swift
var connection = RTMPConnection()
connection.connect("rtmp://username:password@localhost/appName/instanceName")
```
📓 SRT Usage
Ingest
```swift
let connection = SRTConnection()
let stream = SRTStream(connection: connection)
stream.attachAudio(AVCaptureDevice.default(for: .audio)) { error in
    // print(error)
}
stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back), channel: 0) { _, error in
    if let error {
        logger.warn(error)
    }
}

let hkView = HKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// Add the view to ViewController#view.
view.addSubview(hkView)

connection.connect("srt://host:port?option=foo")
stream.publish()
```
Playback
```swift
let connection = SRTConnection()
let stream = SRTStream(connection: connection)

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// Add the view to ViewController#view.
view.addSubview(hkView)

connection.connect("srt://host:port?option=foo")
stream.play()
```
📓 Settings
📹 Capture
```swift
stream.frameRate = 30
stream.sessionPreset = AVCaptureSession.Preset.medium

// Specifies the video capture settings.
let front = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
stream.attachCamera(front, channel: 0) { videoUnit, error in
    videoUnit?.isVideoMirrored = true
    videoUnit?.preferredVideoStabilizationMode = .standard
    videoUnit?.colorFormat = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
}
```
🔊 AudioCodecSettings
When you specify a sampling rate, resampling is performed. Additionally, with multiple channels, downmixing can be applied.
```swift
stream.audioSettings = AudioCodecSettings(
    bitRate: 64 * 1000,
    sampleRate: 0, // 0 means the same as the input source.
    channels: 0,   // 0 means the same as the input source.
    downmix: false,
    channelMap: nil
)
```
🎥 VideoCodecSettings
```swift
stream.videoSettings = VideoCodecSettings(
    videoSize: .init(width: 854, height: 480),
    profileLevel: kVTProfileLevel_H264_Baseline_3_1 as String,
    bitRate: 640 * 1000,
    maxKeyFrameIntervalDuration: 2,
    scalingMode: .trim,
    bitRateMode: .average,
    allowFrameReordering: nil,
    isHardwareEncoderEnabled: true
)
```
⏺️ Recording
```swift
// Specifies the recording settings. 0 means the same as the input.
stream.startRecording(self, settings: [
    AVMediaType.audio: [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 0,
        AVNumberOfChannelsKey: 0,
        // AVEncoderBitRateKey: 128000,
    ],
    AVMediaType.video: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoHeightKey: 0,
        AVVideoWidthKey: 0,
        /*
        AVVideoCompressionPropertiesKey: [
            AVVideoMaxKeyFrameIntervalDurationKey: 2,
            AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline30,
            AVVideoAverageBitRateKey: 512000
        ]
        */
    ]
])
```
📜 License
BSD-3-Clause