This is required to deal with the increased traffic of a passive listener
Note that it breaks the 'auto-restart' of osmocon when active, because
the bootloader will send the prompt at 115200 baud and we won't see it ...
Signed-off-by: Sylvain Munaut <tnt@246tNt.com>
Depends: libosmocore.git Idb89ba7bc7c129a6304a76900d17f47daf54d17d
Let's give a more human-readable decode of the TPU instructions,
naming the TSPACT pin names as well as the device_id/strobe.
TPU_DEBUG used to read from TPU RAM, which unfortunately seems rather
slow, so copying it over from there broke overall timing, leading to
the infamous "DSP Error 24" when TPU_DEBUG is enabled.
We need to make sure to allocate sufficient space to include
the 32-bit frame number at the start of the TPU_DEBUG msgb.
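The allocation fix can be sketched in Python; the buffer layout follows the commit text, but the function name and payload are illustrative, not the actual msgb API:

```python
import struct

def build_tpu_debug_msg(fn, tpu_ram):
    """Prepend the 32-bit frame number to the TPU RAM snapshot.

    The buffer must be allocated large enough for the 4-byte frame
    number *plus* the payload, mirroring the msgb allocation fix.
    """
    buf = bytearray(4 + len(tpu_ram))   # room for fn + payload
    struct.pack_into('<I', buf, 0, fn)  # 32-bit frame number first
    buf[4:] = tpu_ram                   # TPU instruction words follow
    return bytes(buf)
```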
I was quite confused why I constantly see a bit error rate reported
by gsm48_rr, while at the same time the actual L1CTL_DATA_IND
messages all stated num_biterr == 0.
So the log statement was broken ...
Both REQ and CNF share the same message structure, so we can
cheat a bit by changing the message type and sending it back.
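A minimal sketch of the trick described above; the message type codes and dict layout are hypothetical, only the flip-the-type-and-echo idea comes from the commit:

```python
MSG_TYPE_REQ = 0x01  # hypothetical type codes for illustration
MSG_TYPE_CNF = 0x02

def handle_req(msg):
    """Reuse a REQ message as its CNF: same structure, different type."""
    assert msg['type'] == MSG_TYPE_REQ
    msg['type'] = MSG_TYPE_CNF  # flip the message type in place ...
    return msg                  # ... and send the same buffer back
```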
Some commands, such as SETTA or SETPOWER, are expected to be sent
when the transceiver is powered on. We should not drop Uplink
bursts while waiting for a TRXC response.
For now it's easier to comment out the state check completely,
because the existing TRXC state machine is quite messy.
This way the layer1 can activate the proper CBCH task and send us
CBCH blocks with the proper RSL channel number, so they do not end
up being routed to LAPDm and rejected there.
We cannot blindly assume that CBCH is present on TS0/SDCCH4 before
decoding CBCH Channel Description in System Information Type 4.
The original code used simplified logic whereby it assumed that
Spansion flash means MG01GSMT and Samsung flash means MGCxGSMT.
However, there exist MGC2GSMT hw variants with Spansion S71PL032J
flash in them, thus it is necessary to check the complete device ID
rather than just the flash manufacturer ID to distinguish between
MG01GSMT with 8 MiB flash (S71PL064J) and MGCxGSMT with 4 MiB flash
(S71PL032J, K5A3281CTM or K5L3316CAM).
Distinguishing between 4 MiB and 8 MiB flash chip types is also
necessary in order to configure TIFFS reader for the correct FFS
location matching that used by the original firmware, which is
in turn necessary in order to read factory RF calibration values.
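As a hedged sketch of the detection logic (the real code is C and reads the IDs from the flash chip; the table keys and the lookup helper here are purely illustrative), the fix amounts to keying on the complete device ID:

```python
# (manufacturer, device_id) -> (board variant, flash size in MiB)
# Chip names are taken from the commit message; keying on the full
# device ID, not just the manufacturer, correctly classifies MGC2GSMT
# modules that carry Spansion S71PL032J flash.
FLASH_VARIANTS = {
    ('Spansion', 'S71PL064J'):  ('MG01GSMT', 8),
    ('Spansion', 'S71PL032J'):  ('MGCxGSMT', 4),
    ('Samsung',  'K5A3281CTM'): ('MGCxGSMT', 4),
    ('Samsung',  'K5L3316CAM'): ('MGCxGSMT', 4),
}

def detect_board(manufacturer, device_id):
    """Return the hw variant and flash size for a full flash ID."""
    return FLASH_VARIANTS[(manufacturer, device_id)]
```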
* Switch Calypso output CS4/ADD22 to ADD22 function as needed
in order to access the upper half of the flash on GTM900 hw
* Set WS=4 for safety - please refer to this technical article for
the underlying theory:
This change fixes one bug and one uncertainty:
Bug: Huawei defined Calypso GPIO 3 to be DTR input on this modem,
following TI's precedent from C-Sample and D-Sample platforms.
(Huawei's documentation calls the corresponding FPC interface pin
UART_DTR without even mentioning that it is actually wired to
Calypso GPIO 3 in the hardware.)
The previous code (erroneously copied from the gta0x target, which
is different in this regard) configured this GPIO to be an output,
creating a driver conflict.
Uncertainty: GPIOs 4, 6, 10, 11 and 12 power up as inputs, and
Huawei's official fw leaves them as such. But in the absence of
someone reverse-engineering a sacrificial GTM900 module by slicing
its PCB and imaging its copper layers and vias, we don't know if
these Calypso pins are simply unconnected like they are on Openmoko
devices (in which case they are floating inputs and should be
switched to driving dummy outputs), or if they are tied off in the
hardware in one way or another, in which case leaving them as inputs
would be correct.
On the reasoning that floating inputs are a lesser evil than driver
conflicts or shorted outputs, leave these GPIOs as inputs until
we gain better knowledge of this aspect of the hardware.
GTM900-B can share almost all calibration tables with GTA0x and FCDEV3B,
only the VCXO is significantly different.
We have new hardware targets that have appeared since the original
OS#3582 patch was created, namely Huawei GTM900-B and the upcoming
FreeCalypso Caramel2 board. These new targets need the same APC
offset as gta0x and fcdev3b (TI's original Leonardo value), they
have proper calibration records in their FFS (meaning that all
compiled-in numbers become no-effect placeholders), and their PA
tracts are similar enough to Openmoko/FCDEV3B that even in the
absence of calibration the OM/FC numbers are close enough. Thus most
of the tables in board/gta0x/rf_tables.c should be reusable by
these new targets.
However, these new targets have quite different VCXOs from Openmoko
and FCDEV3B, thus they need different AFC parameters. Hence we split
board/gta0x/afcparams.c out of board/gta0x/rf_tables.c, making the
latter more reusable.
Since If6e212baeb10953129fb0d5253d263567f5e12d6, we can read the TIFFS
file-system, thus we can read and use the factory RF calibration values.
* Implement parsing of factory RF calibration values for Motorola C1xx,
Openmoko GTA0x, Pirelli DP-L10, and upcoming FCDEV3B targets.
* Remove the old Tx power level control code and tables, and replace
them with new logic that exactly matches what the official chipset
firmware (TI/FreeCalypso) does, using tables in TI/FreeCalypso
format. Compiled-in tables serve as a fallback and match each
target's respective original firmware.
* Use individual AFC slope values for different targets. The original
value was/is only correct for the Mot C1xx family, whereas
GTA0x/FCDEV3B and Pirelli DP-L10 need different values because
Openmoko's VCXO (copied on the FCDEV3B) and Pirelli's VCTCXO
are different from what Motorola used.
* Take the initial AFC DAC value for the FB search from factory
calibration records on those targets on which it has been
calibrated per unit at the factory.
* Use an individual APC offset for each target instead of
  the hard-coded value. The Mot/Compal and Pirelli firmwares
  (both heavily modified relative to TI) use different APC offset
  settings: 32 for Compal and 0 for Pirelli, while Openmoko and
  FreeCalypso devices use 48.
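The per-target APC offsets above can be captured as a simple lookup; the offset values come from the commit message, but the target identifiers are hypothetical, not the firmware's actual symbols:

```python
# APC offset per target; values as stated above, names illustrative.
APC_OFFSET = {
    'compal':   32,  # Mot C1xx family
    'pirelli':   0,  # Pirelli DP-L10
    'gta0x':    48,  # Openmoko
    'fcdev3b':  48,  # FreeCalypso
}

def apc_offset_for(target):
    """Look up the APC offset instead of using one hard-coded value."""
    return APC_OFFSET[target]
```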
To make the situation around stdint.h even more complicated, this
toolchain no longer #defines __int8_t_defined, which means
we again run into conflicting definitions :/
Let's try to use INT8_MAX as a key.
GIT_SHORTHASH is used by the recently introduced snake game.
Power off when the power button is pressed
I am not sure how other developers do this. There are probably better ways to
make testing faster but I kind of like it this way.
I just call the twl3025_power_off_now function when the power key is pressed.
When a dedicated channel is activated, in chan_nr2mf_task_mask()
we calculate a bitmask of the corresponding multi-frame tasks to
be enabled. Three logical kinds of the multi-frame tasks exist:
- primary (master) - the main burst processing task,
- secondary - additional burst processing task (optional),
- measurement - neighbour measurement task (optional).
By default, the primary task is set to MF_TASK_BCCH_NORM (0x00).
Due to a mistake, the secondary task has also been set to BCCH,
so when we switch to a dedicated mode, we also enable the BCCH.
This leads to a race condition between the multi-frame tasks,
when both primary and secondary ones read bursts from the DSP
at the same time, so the firmware hangs because of that:
nb_cmd(0) and rxnb.msg != NULL
BURST ID 2!=0 BURST ID 3!=1
This regression was introduced together with the experimental PDCH
support. Let's use the value -1 to indicate that the secondary
task is not set, and apply it properly.
Fixes: I44531bbe8743c188cc5d4a6ca2a63000e41d6189
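The mistake and the fix can be sketched in a few lines; the function name follows the commit text, while the mask layout and task values are assumptions for illustration:

```python
MF_TASK_BCCH_NORM = 0x00  # primary default; as an 'unset' marker for the
                          # secondary task it silently enabled BCCH too
MF_TASK_NONE = -1         # sentinel: optional task not set

def chan_nr2mf_task_mask(primary, secondary=MF_TASK_NONE,
                         measurement=MF_TASK_NONE):
    """Build the multi-frame task bitmask, skipping unset optional tasks."""
    mask = 1 << primary
    for task in (secondary, measurement):
        if task != MF_TASK_NONE:  # with 0x00 as the sentinel, an unset
            mask |= 1 << task     # task would wrongly set the BCCH bit
    return mask
```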
For more information, see 3GPP TS 44.014, sections:
- 5.1 "Single-slot TCH loops", and
- 8 "Message definitions and contents".
This feature has nothing to do with the Mobility Management, so
let's handle GSM48_PDISC_TEST messages in the Radio Resources
layer implementation (gsm48_mm.c -> gsm48_rr.c).
DATAMSG.gen_msg() does validate the message before encoding.
This reverts commit 6e1c82d29836496b20e0d826976d9e71b32493d8.
Unfortunately, while solving one problem, it introduced even more regressions.
TRX Toolkit is still backwards compatible with Python2, but Python3
does much better in terms of performance. Also, on Debian Stretch
that is used as a base for our Docker images, Python 2.7 is still
the default. Let's require Python3 in the shebang.
In order to reflect the UL/DL delay caused by premature burst
scheduling (a.k.a. 'fn-advance') in a virtual environment, the
Transceiver implementation now queues all bursts to be transmitted,
so they remain in the queue until the appropriate transmission time.
The API user is supposed to call recv_data_msg() in order to obtain
an L12TRX message on the TRXD (data) interface, so it gets queued by
this function. Then, to ensure timely transmission, the user
of this implementation needs to call clck_tick() on each TDMA
frame. Both functions are thread-safe (queue mutex).
In a multi-trx configuration, the use of the queue additionally ensures
proper burst aggregation on multiple TRXD connections, so all L12TRX
messages are guaranteed to be sent in the right order, i.e. with
monotonically increasing TDMA frame numbers.
Of course, this change increases the overall CPU usage, given that
each transceiver gets its own queue, and we need to serve them all
on every TDMA frame. According to my measurements, when running
test cases from ttcn3-bts-test, the average load is ~50% higher
than what it used to be. Still not significantly high, though.
Related: OS#4658, OS#4546
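The queue behaviour described above can be sketched roughly as follows; the method names follow the commit message, but the internals (a plain list plus a send callback) are illustrative, not the actual TRX Toolkit code:

```python
import threading

class TxQueue:
    """Hold bursts until their TDMA frame number is due."""

    def __init__(self, send_cb):
        self._lock = threading.Lock()  # the 'queue mutex'
        self._queue = []               # list of (fn, msg) pairs
        self._send = send_cb

    def recv_data_msg(self, fn, msg):
        """Queue an L12TRX message received on the TRXD interface."""
        with self._lock:
            self._queue.append((fn, msg))

    def clck_tick(self, fn):
        """Transmit every queued burst whose frame number is due."""
        with self._lock:
            due = [m for f, m in sorted(self._queue) if f <= fn]
            self._queue = [(f, m) for f, m in self._queue if f > fn]
        for msg in due:  # sorted -> monotonically increasing fn order
            self._send(msg)
```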
Running with cProfile shows that there are quite a lot of calls:
469896 0.254 0.000 0.254 0.000 trx_list.py:37(__getitem__)
Let's avoid using it in performance-critical parts.
In general, premature scheduling of bursts to be transmitted
inevitably increases the time delay between Uplink and Downlink.
The more we advance TDMA frame number, the greater gets this
delay. 20 TDMA frames is definitely more than a regular
transceiver needs to pre-process a burst before transmission.
When running together with fake_trx.py (the most commonly used
back-end), it is currently possible that Downlink bursts are received
in the wrong order if more than one transceiver is configured
(multi-trx mode). This is how it looks:
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=629 rssi=-86 toa=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=629 ts=3 bid=1
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=630 rssi=-86 toa=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=630 ts=3 bid=2
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=631 rssi=-86 toa=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=631 ts=3 bid=3
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=633 (!) rssi=-86 toa=0
DSCHD NOTICE sched_trx.c:663 Substituting (!) lost TDMA frame 632 on TCH/F
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=632 ts=3 bid=0
DSCHD DEBUG sched_lchan_tchf.c:60 Traffic received on TCH/F: fn=633 ts=3 bid=1
DTRXD DEBUG trx_if.c:612 RX burst tn=3 fn=632 (!) rssi=-86 toa=0
DTRXD NOTICE sched_trx.c:640 Too many (>104) contiguous TDMA frames elapsed (2715647)
since the last processed fn=633 (current fn=632)
So here a burst with TDMA fn=633 was received earlier than a burst
with TDMA fn=632. The burst loss detection logic considered the
latter one as lost, and substituted it with a dummy burst. When
finally the out-of-order burst with TDMA fn=632 was received, we
got the large number of allegedly elapsed frames:
((632 + 2715648) - 633) % 2715648 == 2715647
Given that late bursts get substituted, the best thing we can do
is to reject them and log an error. Passing them to the logical
channel handler (again) might lead to undefined behaviour.
Related: OS#4658, OS#4546
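The modular frame-number arithmetic quoted above can be checked directly; GSM_HYPERFRAME is the standard 2715648-frame wrap, and the helper name is illustrative:

```python
GSM_HYPERFRAME = 2715648  # TDMA frame numbers wrap at this value

def elapsed_fn(last_fn, cur_fn):
    """Modular distance from the last processed fn to the current one."""
    return (cur_fn + GSM_HYPERFRAME - last_fn) % GSM_HYPERFRAME

# An out-of-order burst shows up as almost a full hyper-frame elapsed,
# as in the log above, which is why such bursts are rejected instead
# of being passed to the logical channel handler again.
```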
It's not something that we should be trying to fix, if the whole
TDMA multi-frame is lost. For some yet unknown reason, sometimes
the difference between the last processed TDMA frame number and
the current one is so huge that trxcon eats a lot of CPU trying
to compensate nearly the whole TDMA hyper-frame:
sched_trx.c:640 Too many (>104) contiguous TDMA frames elapsed (2715647)
since the last processed fn=633 (current fn=632)
Let's just print a warning and not compensate more than one
TDMA multi-frame period corresponding to the current layout.
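A sketch of the proposed clamping, assuming the 104-frame threshold from the log applies to the current layout; the function name and the exact clamping policy are illustrative:

```python
GSM_HYPERFRAME = 2715648  # TDMA frame numbers wrap at this value
MF_PERIOD = 104           # threshold from the log above; layout-dependent

def frames_to_compensate(last_fn, cur_fn):
    """Never substitute more than one multi-frame period of lost frames."""
    elapsed = (cur_fn + GSM_HYPERFRAME - last_fn) % GSM_HYPERFRAME
    if elapsed > MF_PERIOD:
        # warn and clamp instead of substituting ~2.7M dummy frames
        print("Warning: %d frames elapsed, clamping to %d"
              % (elapsed, MF_PERIOD))
        return MF_PERIOD
    return elapsed
```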