This is an XY problem. You want to fix the VLFs in your log file, not fill a transaction log so it overflows into a second log file, freeing up the first log file.
There are definitely ways to stabilize VLFs, even in an AG, even in full recovery, and without disrupting your log chain, creating a second log file, or sending a bunch of bogus transactions to your secondaries. Here is a long-winded but easily reproducible example.
First, let's make a database with less-than-optimal filegrowth for the log, which will generate a pretty bad VLF layout. Let's also take a full backup so the database isn't operating in pseudo-simple mode:
CREATE DATABASE abc
ON (name = N'abc_data', filename = N'c:\temp\abc.mdf')
LOG ON (name = N'abc_log', filename = N'c:\temp\abc.ldf',
size = 1024KB, filegrowth = 1024KB);
GO
ALTER DATABASE abc SET RECOVERY FULL;
GO
BACKUP DATABASE abc TO DISK = 'c:\temp\abc.bak' WITH INIT, COMPRESSION;
GO
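As a side note, a quick way to confirm the database is no longer operating in pseudo-simple mode is to check whether a log chain has been established (this check is my addition, not part of the fix itself); a NULL last_log_backup_lsn means no log chain exists yet:

-- optional sanity check: NULL means the database is still in
-- pseudo-simple mode; after the full backup above, it should be non-NULL
SELECT last_log_backup_lsn
FROM sys.database_recovery_status
WHERE database_id = DB_ID(N'abc');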
Now, before adding any data, let's look at the VLF and file info (note that your metrics may differ slightly):
SELECT current_vlf_size_mb, total_vlf_count, active_vlf_count,
    avg_size_mb = total_log_size_mb / total_vlf_count,
    [used_log_%] = (SELECT used_log_space_in_percent
                    FROM abc.sys.dm_db_log_space_usage)
FROM sys.dm_db_log_stats(DB_ID(N'abc'));
current_vlf_size_mb total_vlf_count active_vlf_count avg_size_mb used_log_%
------------------- --------------- ---------------- ----------- ----------
0.242187            4               1                0.24804675  49.6063
Let's create a table and generate a bunch of log activity to encourage some sub-optimal log growth (this is to simulate the problem in a new database, not to solve the problem you already have):
SET NOCOUNT ON;
GO
CREATE TABLE dbo.flarb(a int, x char(4020) not null default 'x');
GO
BEGIN TRANSACTION;
INSERT dbo.flarb(a) SELECT object_id FROM sys.all_objects;
COMMIT TRANSACTION;
BEGIN TRANSACTION;
INSERT dbo.flarb(a) SELECT object_id FROM sys.all_objects;
COMMIT TRANSACTION;
GO 500
After we add that data, let's run query 1 again (and also EXEC abc.sys.sp_helpfile;):
current_vlf_size_mb total_vlf_count active_vlf_count avg_size_mb used_log_%
------------------- --------------- ---------------- ----------- ----------
1                   11035           11032            0.997552532 99.97239

name      size
abc_log   11272192 KB
Your data may vary depending on SQL Server version and how many objects are in your database, but the order of magnitude should be similar. Let's see what happens when we try to shrink our log file:
DBCC SHRINKFILE(abc_log, 4000);
Running query 1 + sp_helpfile again:
current_vlf_size_mb total_vlf_count active_vlf_count avg_size_mb used_log_%
------------------- --------------- ---------------- ----------- ----------
1                   11033           11032            0.99755208  99.9913

name      size
abc_log   11270144 KB
I mean, it shrank a wee bit, but that didn't have a meaningful impact on the VLFs. Think about it: SQL Server can't shrink the log because the log hasn't been backed up, so nearly all of that log space is still active and can't be removed.
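If you want to confirm that the missing log backup is what's blocking reuse, an optional check is log_reuse_wait_desc in sys.databases, which should report LOG_BACKUP at this point:

-- LOG_BACKUP here means log space can't be reused (or meaningfully
-- shrunk) until a log backup is taken
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'abc';

So let's try backing up the log, then running another shrink, and then run the query again: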
BACKUP LOG abc TO DISK = 'C:\temp\abc.bak.1.trn' WITH INIT, COMPRESSION;
DBCC SHRINKFILE(abc_log, 4000);
current_vlf_size_mb total_vlf_count active_vlf_count avg_size_mb used_log_%
------------------- --------------- ---------------- ----------- ----------
0.242187            11033           6                0.997552088 3.1719937

name      size
abc_log   11270144 KB
No good. While we got the log usage down to 3%, we still couldn't reduce the size of the log, because the handful of VLFs that are still active sit near the end of the file, and a shrink can only release space from the tail of the file.
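On SQL Server 2016 SP2 or later, you can see this for yourself with sys.dm_db_log_info (another optional check); in this scenario, the few active VLFs should report begin offsets close to the end of the ~11 GB file:

-- shows where the still-active VLFs sit within the file;
-- active VLFs near the tail are what prevent the shrink
SELECT vlf_sequence_number,
       vlf_begin_offset / 1048576 AS vlf_begin_offset_mb,
       vlf_size_mb
FROM sys.dm_db_log_info(DB_ID(N'abc'))
WHERE vlf_active = 1;

Let's try again: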
BACKUP LOG abc TO DISK = 'C:\temp\abc.bak.2.trn' WITH INIT, COMPRESSION;
DBCC SHRINKFILE(abc_log, 4000);
current_vlf_size_mb total_vlf_count active_vlf_count avg_size_mb used_log_%
------------------- --------------- ---------------- ----------- ----------
0.265625            11033           5                0.997552088 3.1494207

name      size
abc_log   4096000 KB
Getting closer! This time, the log file agreed to shrink, but the VLF count is still out of proportion. So, we'll try one more time:
BACKUP LOG abc TO DISK = 'C:\temp\abc.bak.3.trn' WITH INIT, COMPRESSION;
DBCC SHRINKFILE(abc_log, 4000);
current_vlf_size_mb total_vlf_count active_vlf_count avg_size_mb used_log_%
------------------- --------------- ---------------- ----------- ----------
0.265625            4027            1                0.993293317 3.153424

name      size
abc_log   4096000 KB
This time it worked a little better: we got the size down and the log usage down, and the VLF count is reduced, but it's still too high (and the average VLF size is still too small).
So let's shrink the log file down as small as it will go, then grow it back out in even, 8 GB chunks. I'm using 64 GB as an example; you may need more, or that may be ridiculous. But the point is to pick a size it's never exceeded before, because if you want to control VLFs, you're going to want to prevent the file from auto-growing - and if it grew to 120 GB before, it will grow to 120 GB again (lots of background here):
DBCC SHRINKFILE(abc_log, 1);
GO
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 8000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 16000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 24000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 32000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 40000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 48000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 56000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 64000MB);
-- should probably fix auto-growth here, too
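As that comment hints, it's worth fixing auto-growth at the same time, so any growth you didn't plan for still happens in sensible, fixed-size chunks rather than tiny increments. Something like this (1024 MB is just an example; pick an increment that fits your write volume):

-- a fixed-size growth increment avoids the tiny VLFs that
-- percentage- or KB-based growth creates; 1024MB is an example, not a rule
ALTER DATABASE abc MODIFY FILE (name = N'abc_log', filegrowth = 1024MB);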
Now, query 1 + sp_helpfile returns:
current_vlf_size_mb total_vlf_count active_vlf_count avg_size_mb used_log_%
------------------- --------------- ---------------- ----------- ----------
0.265625            132             1                484.8484257 0.00710449

name      size
abc_log   65536000 KB
Your results may vary - you may have 1,000 VLFs, you may have 700 - depending on how the shrink and growth operations work through the log. But you should be in a much better place, and you can keep experimenting with sizes that move you toward an optimal VLF count/size (say, 0.5 GB per VLF).
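To see how evenly the VLFs came out, you can inspect the layout directly with the same sys.dm_db_log_info DMV mentioned above (optional, 2016 SP2+):

-- summarizes the VLF layout: total count plus smallest and largest VLF
SELECT vlf_count = COUNT(*),
       min_vlf_mb = MIN(vlf_size_mb),
       max_vlf_mb = MAX(vlf_size_mb)
FROM sys.dm_db_log_info(DB_ID(N'abc'));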
Remember that these log backups were introduced into your log chain, so you should store them with your other log backups in case you ever need to recover - or do this right before your next full backup.
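If you want to double-check what's in the chain, you can list the backups SQL Server recorded in msdb (type D = full, L = log):

-- lists this database's backup history so the new log backups
-- can be accounted for alongside the rest of the chain
SELECT bs.type, bs.backup_start_date, bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'abc'
ORDER BY bs.backup_start_date;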
Also, you may wonder why one log backup wasn't enough. Some background here.